forum_id: string (length 9–20)
forum_title: string (length 3–179)
forum_authors: sequence (length 0–82)
forum_abstract: string (length 1–3.52k)
forum_keywords: sequence (length 1–29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39–50)
forum_url: string (length 41–52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
5RZoYIT3u6
You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning
[ "Ayan Sengupta", "Siddhant Chaudhary", "Tanmoy Chakraborty" ]
The ever-increasing size of large language models (LLMs) presents significant challenges for deployment due to their heavy computational and memory requirements. Current model pruning techniques attempt to alleviate these issues by relying heavily on external calibration datasets to determine which parameters to prune or compress, thus limiting their flexibility and scalability across different compression ratios. Moreover, these methods often cause severe performance degradation, particularly in downstream tasks, when subjected to higher compression rates. In this paper, we propose *PruneNet*, a novel model compression method that addresses these limitations by reformulating model pruning as a policy learning process. PruneNet decouples the pruning process from the model architecture, eliminating the need for calibration datasets. It learns a stochastic pruning policy to assess parameter importance solely based on intrinsic model properties while preserving the spectral structure to minimize information loss. PruneNet can compress the LLaMA-2-7B model in just 15 minutes, achieving over 80% retention of its zero-shot performance with a 30% compression ratio, outperforming existing methods that retain only 75% performance. Furthermore, on complex multitask language understanding tasks, PruneNet demonstrates its robustness by preserving up to 80% performance of the original model, proving itself a superior alternative to conventional structured compression techniques.
[ "Model Compression", "Large Language Models", "Structured Pruning" ]
Accept (Poster)
https://openreview.net/pdf?id=5RZoYIT3u6
https://openreview.net/forum?id=5RZoYIT3u6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xWSrGCPpb9", "vGVX3guYJ4", "uGf6JNPO9o", "sAZNcfjiJp", "rlFXuxGyFY", "qeckXggy0A", "ncAZdqYOve", "miHjoVYis6", "lQUyDFumMg", "kHKnF2cRbe", "i5vGS0qo3S", "gQwQOOEyP6", "gG1zU21h0p", "f4fRMGQyKa", "cXpSbnvB5F", "YjivDeUdQY", "XLNPL6Y2FJ", "VjOX0LaTUr", "THl65HTTx1", "PGKZcsE9t5", "PGHzJHqfAV", "ORZxItNSM2", "OCGQgHy9dm", "NOM7LtE6Po", "NMpl7279S2", "MWVDSHXWK1", "KvrLZRXM4I", "KBNPMvbN5j", "GDdOGEmTGJ", "G6ZxLPyYGN", "FUCXYsvGmZ", "EkOiRf6v1P", "ET4nxPCE8U", "E4q1n6tpZk", "CXbYCrUGHE", "C50O3FMADY", "BPG50Ap83P", "A5Qur96yYW", "9ouEXNap6i", "4tt0k0EXMd", "3vzXlwOWuD", "3apClkMPeL", "2ei0Su4psP", "2ehcf698SL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732249040062, 1732248472259, 1732984580381, 1732550579752, 1733222732611, 1730622310338, 1732032621097, 1730582586308, 1732708485727, 1732534892522, 1732550218531, 1732033424290, 1730234514554, 1732605668747, 1732462503346, 1732534820623, 1732462541191, 1732362271073, 1732116268603, 1732362358400, 1737524271657, 1732494347732, 1732032450268, 1733061260015, 1732034154558, 1732031889352, 1732805502194, 1732248803720, 1733141436515, 1730475783579, 1732254958890, 
1732722924851, 1732252427739, 1733195963459, 1732034444806, 1732032097605, 1732217750629, 1733188815295, 1732033093656, 1732249067813, 1732033701560, 1732034826665, 1734052468408, 1732180536813 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_ZGwT" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_bk7b" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_onBH" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_ZGwT" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_bk7b" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_ooUi" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_ooUi" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_onBH" ], [ "ICLR.cc/2025/Conference/Submission13607/Reviewer_bk7b" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ], [ "ICLR.cc/2025/Conference/Submission13607/Area_Chair_tz1v" ], [ "ICLR.cc/2025/Conference/Submission13607/Authors" ] ], "structured_content_str": [ "{\"title\": \"A gentle reminder to check our responses\", \"comment\": \"Dear reviewer bk7b,\\n\\nWe have addressed all the concerns you raised with additional empirical evidence. We request you to kindly review our responses and let us know if you have further questions.\\n\\nWe will make our best effort to clarify your doubts and concerns.\\n\\nThanks,\"}", "{\"title\": \"Acknowledging Reviewer onBH's Comment\", \"comment\": \"Dear reviewer onBH,\\n\\nWe thank you for reassessing our paper and increasing the soundness score. We would greatly appreciate your guidance on any additional concerns you may have about improving the **overall rating of our paper**. \\n\\nWe are eager to hear any additional recommendations for increasing the impact of our work.\\n\\nThanks,\"}", "{\"title\": \"Requesting again to check our responses\", \"comment\": \"Dear reviewer bk7b,\\n\\nWe have addressed your recent concerns with new sets of experiments. We sincerely request you check our responses. If you feel our responses address your questions, please consider reassessing our paper. 
We are ready to address other concerns if any.\\n\\nThanks\"}", "{\"title\": \"Please check our responses to your followup questions\", \"comment\": \"Dear Reviewer bk7b,\\n\\nThank you very much for your followup questions. We have addressed your recent concerns with new sets of experiments. Since the discussion is ending tomorrow, we request you to have a look at our recent responses and let me know if you have further concerns. If you feel our responses address your questions, please consider reassessing our paper.\\n\\nLooking forward to more followup discussion.\\n\\nThanks\"}", "{\"title\": \"Official Comments by Authors\", \"comment\": \"We sincerely thank all the reviewers for their valuable feedback during the discussion period and their comprehensive review of our work. We appreciate their recognition of the contributions of our work,\\n\\n## The first calibration-free structured pruning technique\\n\\nOur proposed method, PruneNet, is the *first* structured model pruning technique that does not need any calibration dataset for compression, wherein a data-independent importance scorer is used to prune the layers of an LLM. **All reviewers highlighted the importance of such calibration-free pruning methods**, particularly in data-private and low-resource settings, and a thorough discussion of the motivation behind such methods (**particularly with reviewer bk7b**) reaffirms the key contribution of our work. Completely removing calibration datasets also makes PruneNet a highly reusable and efficient pruning technique, as it bypasses the need to run expensive forward/backward passes of LLMs through the calibration data.\\n\\n## Efficiency and transferability of the proposed method\\n\\nWith PruneNet, we can significantly reduce the model compression time by 50%. This efficiency, as highlighted in the paper, is a key advantage of PruneNet. 
It can compress an LLM at different ratios at once without retraining the policy learner, a feature that is particularly useful in practical applications. This adaptability to different edge device resources **has been acknowledged by both reviewers bk7b and ooUi**, further enhancing the method's practicality.\\n\\n## Extensive comparison with baselines on a wide range of models and compression ratios\\n\\nWe have presented an extensive comparison of PruneNet with other competitive model pruning techniques, notably SliceGPT, in a variety of set-ups. This comparison, which includes scenarios with/without recovery fine-tuning (RFT) and using different calibration datasets (WikiText2, Alpaca), provides a comprehensive view of PruneNet's performance. We also compared the performance for a variety of model architectures, with the total number of parameters ranging from 125M to 13B. Such an extensive comparison showcases the robustness of our proposed method, even when baselines are calibrated with instruction-based calibration datasets such as Alpaca. \\n\\n**All the reviewers (ZGwT, bk7b, ooUI, onBH) have acknowledged the results and thoroughness of our experiments**, providing positive feedback that reaffirms the quality and significance of our work.\\n\\n## Robustness of PruneNet\\u00a0\\n\\nWe've also presented a detailed study of various components of the design choices in PruneNet. This includes the choice\\nof reward functions used in the policy learning process, the usage of FFN1 vs FFN2 layers for sampling, a comparison of a policy-based\\npruner vs a static pruning method such as random sampling of indices and a comparison of stochastic and deterministic policies to be used for pruning. 
The results of these experiments motivate the design choices we made and highlight the robustness of PruneNet in various setups.\\n\\n## Clarity of presentation\\n\\n**Reviewer onBH commended the clarity of presentation in our paper.**\\n\\nWe've also thoroughly responded to every follow-up question for each reviewer. Here is a summary of all the responses:\\n\\n* In response to reviewer ZGwT's follow-up questions, we presented an ablation study comparing PruneNet's performance against a simple random sampling approach (with PruneNet performing better). We highlighted the importance of sampling in our overall design (against simply choosing the top-k most important rows of a matrix).\\n\\n* For reviewer bk7b, we presented a comparison of using different calibration datasets (WikiText2, Alpaca) with SliceGPT against PruneNet. While SliceGPT's performance is marginally better than PruneNet when it uses the Alpaca dataset for the LLaMa-2-7B model, the trend was the opposite for the Phi-2 model, exposing the inherent limitation of calibration-based pruning techniques (namely their sensitivity to the choice of calibration data).\\n\\n* For reviewer ooUI, we presented the results of PruneNet with recovery fine-tuning. 
We conducted additional ablation experiments to highlight the importance of each design choice in PruneNet.\\n\\n* For reviewer onBH, we presented a few more ablation results (FFN1 vs FFN2 layers, RFT), results of an implementation of PruneNet for matrices in the attention layer,\\u00a0 and the results of PruneNet for compression ratios larger than 40%.\\n\\nBased on our responses, we observed an overall positive change in the scores:\\n\\n* Reviewer onBH improved the soundness score of our paper from 3 to 4.\\n* Reviewers ooUI and bk7b improved the overall rating of our paper from 5 to 6.\\n* While reviewer ZGwT acknowledged that all of their concerns were addressed in our rebuttal, they maintained their original rating of 6.\\n\\nWe hope that our work motivates further research in developing more efficient model compression techniques in low-resource settings and makes a good contribution to the field.\"}", "{\"summary\": \"This paper introduces PruneNet, a policy-based pruning method designed to learn the removal of redundant parameters without the need for calibration data. PruneNet\\u2019s policy is trained based on intrinsic model properties to minimize information loss, quantified using the KS distance between the singular value distributions of the uncompressed and compressed matrices. To make policy learning differentiable, the paper employs the reparameterization trick, introducing a random variable into the sampling process and training the policy module by evaluating various pruning outcomes. The proposed method is primarily evaluated on Llama-2 7B and Phi-2, with comparisons to SliceGPT, a technique that directly compresses parameter matrices by training on datasets. 
Results indicate that PruneNet achieves superior performance on several popular benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"One of the key strengths of this paper is a data-independent importance scorer, derived through a policy-learning process. The policy focuses mainly on individual linear layers, making the method theoretically efficient. This efficiency is demonstrated by its successful application to Llama-7B, where pruning requires only 15 minutes, illustrating its practicality for large-scale models.\", \"The authors conducted extensive experiments, exploring factors such as compression ratio, RFT dataset, and computational costs.\", \"The policy learner can be applied across different compression ratios, which offers practical value for determining optimal pruning ratios in large language models (LLMs). This flexibility could be particularly advantageous for users looking to fine-tune pruning ratios without retraining the policy for each setting.\"], \"weaknesses\": [\"The pruning approach is focused on single layers, potentially overlooking structured interdependencies with subsequent layers. In structured pruning, parameters in one layer impact those in the next; by not accounting for this, the proposed method risks removing essential parameters from interconnected layers. This oversight could inadvertently affect model performance, as it might miss opportunities for more coordinated pruning across layers.\", \"Although the method is designed to optimize the KS distance before and after pruning, it lacks a detailed analysis of its policy-learning approach. A simple random sampling approach might also work: sampling multiple randomly pruned matrices and selecting the one with the lowest KS distance might be able to provide good results. Including more ablation studies to compare these methods would enrich the analysis. 
Additionally, further discussion on the design and benefits of the policy approach would add valuable insights.\", \"While the method shows promise on Llama-7B, its efficacy on larger models, such as Llama 13B, remains untested. An evaluation on larger LLMs would help establish its robustness and scalability, further highlighting its applicability to a broader range of large models.\"], \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer bk7b Comments - Part 2\", \"comment\": \"**Justification of spectrum matching and comparison with other reward criterion.** It is important to understand that pruning reduces the cardinality of the spectrum. Therefore, the traditional pointwise distance measures (like Euclidean/Cosine/Frobenius norm) are not applicable for comparing the spectrum structure of uncompressed and compressed models. Therefore, we resort to probabilistic distance measures that can capture the shift in the empirical distribution of spectrums of the compressed model. Corollary 3.3 suggests that compression reduces the spectrum range, and Figure 1 highlights that with more compression, the spectrum becomes more right-skewed. To further understand the effectiveness of PruneNet under different distance measures, we evaluate the LLaMA-2-7B model compressed using PruneNet with non-parametric Anderson\\u2013Darling measure of agreement.\\n\\nTable 2 (Table 18 of the updated paper) highlights the effectiveness of PruneNet with both Kolmogorov-Smirnov (highlighted as KS) and Anderson\\u2013Darling (highlighted as AD) distance measures. Under both reward functions, we observe a similar performance of PruneNet for different compression ratios with the LLaMA-2-7B model. The results further emphasize the stability of our proposed compression method under different choice metrics.\\n\\n### Table 2. 
Zero-shot performance of Llama-2-7B compressed with PruneNet with different reward functions.\\n\\n| Compression Ratio | Reward Function | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | KS | 75.30 | 65.50 | 66.43 | 63.80 | 37.20 | 61.66 |\\n| 20% | AD | 73.01 | 63.30 | 65.70 | 60.40 | 37.46 | 59.97 |\\n| 25% | KS | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | AD | 73.88 | 61.17 | 63.98 | 61.62 | 35.84 | 59.30 |\\n| 30% | KS | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | AD | 72.13 | 61.88 | 60.18 | 58.00 | 33.62 | 57.16 |\\n\\n**References**\\n\\n[1] Williams, Miles, and Nikolaos Aletras. \\\"On the impact of calibration data in post-training quantization and pruning.\\\" In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 10100-10118. 2024.\"}", "{\"summary\": \"This paper proposes a FFN-layer pruning algorithm for large language models that does not require a calibration dataset. The idea is to prune the weight matrices so that the spectrum of the matrices before and after pruning remains similar in the Kolmogorov-Smirnoff distance. To hasten the pruning procedure, one trains a policy that predicts the importance scores of the first layer of FFN. The empirical results show that the proposed method outperforms SliceGPT, in terms of prediction quality and the actual speedup.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The central research question of training a policy to prune LLMs is quite intriguing, with some practical importance. 
Although the paper mainly claims the benefit on the data side, I personally find that the method might have more future uses on low-memory or compute scenarios where it is very difficult to run usual pruning algorithms that require much gradient-based computations.\", \"The empirical performance seems to be reasonably good.\", \"The paper asks many interesting questions in section 5.2, including the transferability and the layerwise compression ratio.\"], \"weaknesses\": [\"One nitpick is that the comparison against SliceGPT, reported on Table 2, can be quite misleading. The figures seems to be the one for SliceGPT with WikiText2 calibration data, which is much worse than the SliceGPT with Alpaca calibration data. In fact, with Alpaca, I believe that the SliceGPT works much better than the proposed PruneNet. Of course, this is somewhat expected because it utilizes more data than PruneNet; making a comparison with both WikiText and Alpaca case does not diminish the usefulness of the proposed method, as they assume different amount of usable resources. Thus I recommend including both SliceGPTs, for a more comprehensive comparison.\", \"Another weakness is that the proposed method may not work better than the methods that use calibration data, as assuming no access to the calibration data also makes it impossible to use RFT. It seems like Table 3, ironically, can also be interpreted as a limitation of the proposed method, in a sense that the model pruned with PruneNet cannot recover much with RFT, while LLMs pruned with other methods can recover much with fine-tuning.\", \"Also, the motivation is quite unclear to me. At least for the text data, I am not sure why it is practical to assume no access to the calibration data. We already have quite abundant text data (crawled from web), even as our benchmark. Why should we suddenly assume that those data are not available? 
Can you give any further justifications?\", \"The idea of spectrum matching is also not justified well. First, I do not see how corollary 3.3 can be used to predict that the singular values of the matrix become more right-skewed. Doesn't the corollary simply states that the spectrum will be about subsampling, rather than predicting anything about the skewness? Also, the fact that there can be some spectrum shift does not fully mean that it will be an effective criterion for deciding how to prune. Perhaps more comparison with other criterion (e.g., minimizing Frobenius norm distortion) may help us understand whether the spectrum matching part is indeed an essential and nontrivial component.\"], \"questions\": \"Discussed in 'weaknesses'\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Sincere request to check our recent responses\", \"comment\": \"Dear reviewer bk7b,\\n\\nWe have addressed your recent concerns with new sets of experiments. Since the discussion period is ending soon, we request you to have a look at our recent responses and let us know if you have further concerns. Your feedback is valuable for us to address the potential weaknesses and improving the quality of our work.\\n\\nIf you feel our responses address your questions, please consider reassessing our paper.\\n\\nThanks\"}", "{\"title\": \"Response to Reviewer bk7b Comments - Part II\", \"comment\": \"**On spectrum matching.** In Figure 1 of our paper, we highlighted how slicing impacts the spectrum structure of an LLM. In Corollary 3.3, we formalized this fact using the Poincare separation theorem. To understand the importance of using a learned policy instead of a static method such as random slicing of a weight matrix, we further carried out the following experiment: we randomly pick indices to slice off for each layer and report the standard deviation and mean of the obtained rewards. 
In Table 2, we observe a high standard deviation of rewards for smaller compression ratios, thereby requiring a mechanism that **learns** the indices to be pruned off.\\n\\n| Sparsity | Mean Reward (Layer 1) | STD Reward (Layer 1) | Mean Reward (Layer 32) | STD Reward (Layer 32) |\\n| -------- | ------------ | ----------- | ------------- | ------------ |\\n| 5% | 30.9 | 1.04 | 31.22 | 0.31 |\\n| 10% | 16.2 | 0.39 | 15.6 | 0.18 |\\n| 20% | 8.2 | 0.07 | 7.7 | 0.03 |\\n| 25% | 6.6 | 0.06 | 6 | 0.04 |\\n| 40% | 4.2 | 0.05 | 3.6 | 0.01 |\\n| 55% | 3.1 | 0.02 | 2.5 | 0.01 |\\n| 70% | 2.3 | 0.02 | 1.8 | 0 |\\n\\nTherefore, Corollary 3.3 and the above experiment suggest developing a learnable strategy to minimize the difference between uncompressed and compressed model spectrums for retaining information in the post-compressed LLM.\\n\\n**References**\\n\\n[1] Ashkboos, Saleh, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. \\\"Slicegpt: Compress large language models by deleting rows and columns.\\\"\\u00a0arXiv preprint arXiv:2401.15024\\u00a0(2024).\\n\\n[2] Lin, Bin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, and Li Yuan. \\\"Video-llava: Learning united visual representation by alignment before projection.\\\"\\u00a0arXiv preprint arXiv:2311.10122\\u00a0(2023).\"}", "{\"title\": \"Sincere request to check our responses at least once before the discussion ends tomorrow\", \"comment\": \"Dear Reviewer ZGwT,\\n\\nWe have been trying to reach out to you with a request to check our responses to your comments. We have meticulously addressed all your concerns with new sets of experiments. 
We are sure our responses will address your comments.\\nSince the discussion period will end tomorrow, it is our sincere request to you to check our responses at least once and reassess our paper.\\n\\nSincerely look forward to hearing from you.\\n\\nThanks\"}", "{\"title\": \"Response to Reviewer ooUi Comments - Part 2\", \"comment\": \"**Efficiency of policy learning.** The primary motivation behind policy learning is to decouple the model pruning process from the model itself. Therefore, with an appropriate policy learner model, we can learn which parameters can be compressed within an arbitrary model component just by looking at the model parameters without posing any additional constraint on the component architecture. This allows us to flexibly use the same policy learner model for compressing different layers and components of an LLM.\\n\\nWe highlight the results with random policy in Table 3 (Table 16 of the updated paper), where we select the pruned indices randomly for each model parameter. We refer to this as a 'random' selection. We observe an average $2$% drop with a random selection method, demonstrating the need for a learnable policy for effective model compression.\\n\\n### Table 3. Effect of learnable policy on compressed Llama-2-7B model. \\n\\n| Compression Ratio | Selection | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | Policy-based | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\\n| 20% | Random | 72.36 | 63.14 | 61.18 | 60.31 | 36.52 | 58.70 |\\n| 25% | Policy-based | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | Random | 70.13 | 60.38 | 58.38 | 56.27 | 35.67 | 56.20 |\\n| 30% | Policy-based | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | Random | 73.13 | 59.35 | 55.15 | 49.83 | 31.66 | 53.90 |\\n\\nTo further understand the effectiveness of the learned policy empirically, we perform experiments where we learn deterministic policy based on the parameter importance calculated in Equation 3 of the paper. In the deterministic policy, we chose only the topk parameters based on this computed importance metric, making the selection deterministic. Table 4 (Table 17 of the updated paper) highlights the results of LLaMA-2-7B with deterministic and stochastic policies (policy learned with PruneNet defined in Equation 5 of the paper). We observe that deterministic policy often underperforms the stochastic policy with an average margin of $2$%. The results highlight that parameter importance alone cannot determine which parameters to compress. Preserving the spectral structure between the compressed and uncompressed models is critical to ensure minimal performance drop post-compression.\\n\\n### Table 4. Effect of stochastic policy on compressed Llama-2-7B model.\\n\\n| Compression Ratio | Policy | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | Stochastic | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\\n| 20% | Deterministic | 72.91 | 61.64 | 61.05 | 56.69 | 36.52 | 57.76 |\\n| 25% | Stochastic | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | Deterministic | 69.97 | 59.27 | 59.39 | 56.69 | 33.53 | 55.77 |\\n| 30% | Stochastic | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | Deterministic | 69.64 | 58.25 | 54.45 | 54.97 | 31.23 | 53.71 |\\n\\n**Computational budget.** The policy learner model is typically $1000$ times smaller than the LLM to be compressed (number of learnable parameters $<7$M). As Section 1 of the paper highlights, most existing structured pruning methods learn the compressed model after being calibrated on some dataset. Therefore, the computational budget utilized by the policy learner model is far less than the existing baselines, making our compression method more scalable and efficient.\\n\\n**Pseudo code.** We thank the reviewer for the suggestion. We have added the pseudo-code of the proposed compression method in Algorithm 1 of the updated paper.\"}", "{\"summary\": \"This paper describes a technique -- PruneNet that avoids the usage of calibration data for structured pruning. They instead reformulate it as a policy learning process via (prun)ing-object(e)d (net)work. They does this by first making a corollary that the range of eigenvalues of the compressed model weights is a subset of the original model weights (corollary 3.3). 
Following that, the key idea is to identify (using an MLP/policy learner) and remove rows/columns in such a way that the distribution (and distance using Kolmogorov Smirnoff (KS) Distance) of the eigenvalues for original and compressed weights remain the same.\\n\\nThe method seems to achieve robust results across various models/datasets and compression ratios without a calibration dataset.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The removal of dependence on calibration dataset while pruning. As noted by authors in L115, calibration dataset has an impact on performance of models, so it has important effects for applications.\\n2. The intrinsic model compression is an interesting observation. This helps practitioners understand the dependency of compression methods (whether method X is adding parameters or not) with respect to the compression ratios\\n3. The reusability of policy learner i.e train a learner (MLP) for one compression ratio and use it for another ratio as the authors have noted down is a huge strength (L412)\\n4. Very well written with clear presentation.\\n\\nI believe the paper would be a great addition with some clarifications needed in design space and experiments.\", \"weaknesses\": \"I don't have a list of \\\"weakness\\\" i.e substantial missing components. Please refer the questions section for potential ideas and clarifications needed.\", \"questions\": \"***Model design***\\n\\n1. The sampling is done on FFN1 and complementary columns are chosen in FFN2 to match the dimensions. This makes sense logically/intuitively but did the authors experiment with sampling FFN2 separately and/or any other design choices. Does the indices match the intuitive selection. Or does the authors have an explanation on why this choice is better compared to other intuitive selections? \\n2. Did the authors perform experiments on usage of other penalty functions (KS)? That can be a good ablation study to have. 
\n3. L292: The assumption of discounted penalty works if the LLM has higher singular values at later layers. Has there been any (previous) analysis on existing LLMs on whether that's the scenario? A reference would be great if it exists. If not, having plots similar to Figure 1 for other LLMs in the appendix would be great. \n\n***Experiments***\n\n4. L361: Are the results for the one-sided test and additional details available? A reference would be great.\n5. L375 (RFT): Can the authors add more insights on why RFT on model X might reduce the performance?\n6. L375 (RFT): Can the RFT be done on SliceGPT as well (i.e., Table 3 looking similar to Table 2)? That will help to understand whether RFT helps SliceGPT more (>1.5%) and PruneNet less (marginal 1.5%), or less for both methods (in either case, it will be a useful finding to understand the effects of RFT).\n\n***Suggestion for additional experiment on design choice***\n\n1. Can this policy be extended to attention matrices as well? From my understanding, the key idea seems to be applicable to any matrix, so this experiment will be super helpful to understand the design space more clearly. There have been some studies [1] done to understand the effects of compression on different modules in a transformer, so the authors might take some inspiration to understand the effects of compressing different modules.\n2. Can the authors perform an experiment with >40% compression ratio? The performance drops most likely with higher compression ratios [1, 2, 3], but it would be great to know how much PruneNet can achieve for reasonable compression. \n\n***Possible relevant citation***\n1. The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models - https://arxiv.org/abs/2312.00960\n2. Are sixteen heads really better than one? - https://arxiv.org/abs/1905.10650\n3. 
Compressing BERT: Studying the effects of weight pruning on transfer learning - https://arxiv.org/abs/2002.08307\n\n***Format***\n\n1. Line 37 - Can be more descriptive about the model sizes rather than a simple statement (e.g., Llama 2 can be of different sizes)\n2. Can the authors give a clear explanation of the difference between Effective Sparsity and Sparsity (maybe in the Appendix if possible)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response and experimental results. They addressed my concerns about the weaknesses. I will keep my score at 6.\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear reviewer ZGwT,\n\nThe discussion period is ending very soon. We are yet to receive any feedback from you regarding our responses. We tried our best to address all your comments with additional experimental results. This is our sincere request to you to kindly check our responses and consider reassessing our paper.\n\nWe look forward to your response.\n\nThanks\"}", "{\"title\": \"Response to Reviewer bk7b Comments - Part I\", \"comment\": \"We thank reviewer bk7b for the comments. We address the concerns below -\n\n**Discrepancy in the tables.** The discrepancy is in SliceGPT's original paper (Ashkboos et al., 2024), where they computed the average score incorrectly for the 30% sparsity ratio. We report the correct number in the above response.\n\nWe have updated the Alpaca results in the revised paper.\n\n**RFT.** We argue that, unlike other compression methods like SliceGPT, RFT is optional for recovering models compressed with PruneNet. As PruneNet preserves the internal knowledge of an LLM through spectral matching, PruneNet is less influenced by RFT. Nevertheless, results in Table 1 show that the post-RFT performance of PruneNet is still better than SliceGPT's. \n\n### Table 1. 
Results with RFT on WikiText2 on Llama-2-7B\n\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\n| 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\n| 20% | SliceGPT | 69.86 | 64.72 | 61.07 | 54.25 | 36.73 | 57.27 |\n| 20% | PruneNet | 74.76 | 66.22 | 69.38 | 65.61 | 39.25 | 63.04 |\n| 25% | SliceGPT | 69.26 | 64.96 | 58.65 | 52.36 | 35.75 | 56.20 |\n| 25% | PruneNet | 74.37 | 66.46 | 65.71 | 60.82 | 36.60 | 60.79 |\n| 30% | SliceGPT | 67.41 | 63.22 | 55.65 | 50.76 | 34.13 | 54.23 |\n| 30% | PruneNet | 73.01 | 63.46 | 63.21 | 60.14 | 35.92 | 59.15 |\n\n**On calibration data.** We emphasize that a careful selection of calibration datasets, which may require extensive experimentation, is a significant limitation of existing compression methods and may restrict their applicability in many applications. For instance, consider a scenario where an LLM has been trained on private data for domain-specific applications such as healthcare. Finding good-quality calibration datasets might be monetarily and legally demanding in such a situation. Moreover, evaluating the post-compression performance of LLMs to choose the correct calibration could also be infeasible in those scenarios.\n\nAnother point that motivated us to pursue data-free pruning methods was that calibration-based pruning methods perform computation in the original model space, which requires loading the calibration data in memory and running it through the model. For instance, SliceGPT performs its rotation operations on CPUs (reference - https://github.com/microsoft/TransformerCompression/blob/main/src/slicegpt/rotate.py) to prevent out-of-GPU-memory errors. Due to these computational challenges, SliceGPT requires a significantly longer time to compress LLMs than PruneNet (highlighted in Section 5.3 of the paper). 
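To make the data-free criterion discussed in this thread concrete, here is a minimal, self-contained sketch (the matrix size, the 30% column-slicing step, and all variable names are hypothetical stand-ins, not the authors' implementation): a two-sample KS statistic between the singular-value spectra of an original and a column-pruned weight is computed from the weights alone, with no calibration data loaded anywhere.

```python
import numpy as np

def ks_distance(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    # the two empirical CDFs, evaluated on a merged grid of sample points.
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))                  # stand-in for an FFN weight
keep = np.sort(rng.choice(512, size=int(0.7 * 512), replace=False))
W_pruned = W[:, keep]                                # 30% of columns sliced away

# Singular-value spectra of the original and compressed weights.
s_orig = np.linalg.svd(W, compute_uv=False)
s_pruned = np.linalg.svd(W_pruned, compute_uv=False)

print(f"KS distance between spectra: {ks_distance(s_orig, s_pruned):.3f}")
```

A smaller KS distance means the pruned weight's spectrum stays closer to the original, which is the quantity the learned policy is rewarded for preserving.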
\\n \\nOn the other hand, RFT is often performed with parameter-efficient fine-tuning strategies such as LoRA. Therefore, fine-tuning a compressed LLM is computationally more viable than calibration. PruneNet outperforms SliceGPT both with and without RFT, demonstrating its effectiveness without introducing significant computational expense.\\n\\n**Example of multi-modal calibration.** Compressing a multi-modal LLM like Video-LLaVA-7B (Lin et al., 2023) using traditional methods might require good-quality video calibration data. PruneNet offers greater flexibility by omitting the need for calibration. Therefore, PruneNet can easily compress this model without spending any extra effort collecting a video dataset.\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear reviewer bk7b,\\n\\nThe discussion period is ending very soon. We are yet to receive any feedback from you regarding our responses. We tried our best to address all your comments with additional experimental results. This is our sincere request to you to kindly check our responses and consider reassessing our paper.\\n\\nWe look forward to your response.\\n\\nThanks\"}", "{\"title\": \"Another reminder to read our response\", \"comment\": \"Dear reviewer ZGwT,\\n\\nThe discussion period ends very soon. It is our sincere request to read our responses. We tried our best to address all your comments with additional experimental results. Kindly let us know if you have any further questions. If our responses address your concerns, kindly consider reassessing our paper.\\n\\nWe look forward to your response.\\n\\nThanks\"}", "{\"title\": \"Confirmation of Rebuttal Submission\", \"comment\": \"Dear Reviewers,\\n\\nWe appreciate your effort to review our paper meticulously and provide us with your valuable suggestions. We have addressed all your comments and provided the additional results you suggested. 
We have also updated the paper (changes are highlighted in blue color) to accommodate these changes. We request you to kindly go through our responses and let us resolve any other query that you may have.\n\nThanks,\"}", "{\"title\": \"Please read our responses\", \"comment\": \"Dear reviewer bk7b,\n\nThe discussion period ends very soon. This is our sincere request to read our responses. We tried our best to address all your comments with additional experimental results. Kindly let us know if you have any further questions. If our responses address your concerns, kindly consider reassessing our paper.\n\nWe look forward to your response.\n\nThanks\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the detailed response. I have just read through your comments, and came up with several follow-up questions.\n\n**On Alpaca.** Thank you for sharing an interesting result. The outperformance of the proposed method on Phi-2 is in fact quite interesting. Yet, I do observe that there is some discrepancy between the performance of SliceGPT on Phi-2 you reported and the ones in Table 8 of the original paper. Why is that?\n\nAlso, there is nothing wrong with a data-free method slightly underperforming a method that utilizes data extensively; it does not undermine the quality of the paper (or make me appreciate this paper less). My point is that a comprehensive and informative comparison is always better; thus not including them in the paper is indeed something that hurts the quality of the paper.\n\n**On RFT.** Would there be any thoughts on this point?\n\n**On calibration data.** Thank you for the pointer to Williams and Aletras (2024). I do agree with the point on the reusability. Regarding the sensitivity, it is not clear to me why this should be a good argument for not using the calibration data at all; wouldn't a careful choice of calibration data (based on many experiments) be the obvious choice? 
Also, regarding the data modalities other than text, it would be great if authors could give a nice example.\\n\\n**On spectrum matching.** I see. It is still questionable to me how important the theoretical results are to the actual design decisions here, but is nevertheless useful to have one.\"}", "{\"title\": \"Response to Reviewer bk7b Comments - Part 1\", \"comment\": \"We thank the reviewer for the valuable feedback. We address the concerns raised. All the changes in the main manuscript are highlighted with blue color.\\n\\n**Comparison with SliceGPT with Alpaca calibration data.** Table 1 (Table 10 of the updated paper) contains a comparison of the performance of the LLaMA-2-7B and Phi-2 pruned with SliceGPT using the Alpaca calibration dataset (as opposed to the WikiText2 dataset used in the main text of our paper). While LLaMA-2-7B pruned with SliceGPT exhibits better average performance for all compression ratios, Phi-2 exhibits an opposite trend, wherein PruneNet beats SliceGPT by a consistent margin of at least $2$% for all compression ratios. This trend reinforces a fundamental limitation of calibration data-based pruning techniques, namely their sensitivity to the choice and quality of the calibration dataset and the choice of model architectures. In conjunction with this, it should also be noted that the Alpaca dataset is an instruction-tuning dataset (as opposed to WikiText2). Therefore, it is not unreasonable for the performance of pruned models calibrated with the Alpaca dataset on generative tasks to be better than that of a calibration-free technique like PruneNet.\\n\\n### Table 1. Results without RFT, where SliceGPT uses Alpaca as calibration data\\n\\n| Model | Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\n| ------------ | ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\n| Llama-2-7B | 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\n| Llama-2-7B | 20% | SliceGPT | 76.5 | 65.51 | 65.2 | 69.99 | 41.21 | 63.68 |\n| Llama-2-7B | 20% | PruneNet | 75.3 | 65.51 | 66.43 | 63.8 | 37.29 | 61.67 |\n| Llama-2-7B | 25% | SliceGPT | 74.21 | 64.01 | 60.55 | 66.88 | 38.91 | 60.91 |\n| Llama-2-7B | 25% | PruneNet | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\n| Llama-2-7B | 30% | SliceGPT | 72.25 | 59.83 | 55.86 | 63.93 | 37.8 | 57.93 |\n| Llama-2-7B | 30% | PruneNet | 71.11 | 61.09 | 58.3 | 53.2 | 33.53 | 55.45 |\n| Phi-2 | 0% | Dense | 79.11 | 75.77 | 73.83 | 78.32 | 54.18 | 72.24 |\n| Phi-2 | 20% | SliceGPT | 76.17 | 68.75 | 61.95 | 72.18 | 45.48 | 64.90 |\n| Phi-2 | 20% | PruneNet | 74.37 | 70.80 | 65.53 | 74.71 | 47.53 | 66.59 |\n| Phi-2 | 25% | SliceGPT | 75.68 | 64.88 | 58.19 | 70.41 | 43.43 | 62.52 |\n| Phi-2 | 25% | PruneNet | 74.37 | 68.98 | 62.18 | 70.54 | 44.45 | 64.10 |\n| Phi-2 | 30% | SliceGPT | 74.05 | 62.12 | 53.31 | 67.26 | 39.42 | 59.23 |\n| Phi-2 | 30% | PruneNet | 72.80 | 67.48 | 56.80 | 67.55 | 40.61 | 61.05 |\n\n**Motivation for calibration-free model compression techniques.** While it is true that there is an abundance of calibration data for textual modalities, it is not the *availability* of calibration data alone that determines the quality of a pruned model. A detailed study by Williams and Aletras (2024) reveals a large degree of sensitivity in the performance on downstream tasks of pruned models based on the selected calibration data and model architectures, which questions the usability of calibration-based pruning methods in data-private settings; this has made the problem of sampling high-quality calibration datasets an active area of model pruning research. 
Another major limitation of calibration-based pruning techniques is that of *reusability*, wherein pruning multiple models necessitates running the forward/backward passes of pruned models on the calibration datasets, making the overall process highly inefficient. In contrast, PruneNet aims to be the *first* pruning technique to alleviate these difficulties and can also run right out of the box in low-resource/compute settings. Finally, PruneNet can be used in data modalities other than text, where sampling calibration data might be costly.\"}", "{\"title\": \"Yet another request to check our responses\", \"comment\": \"Dear reviewer bk7b,\\n\\nWe only have a day left till the end of the discussion period. We earnestly request you to check our previous responses and comment back if you have any follow up question. We have put significant effort into conducting additional experiments and responded to your queries. Your acknowledgement is very important for us to present our work to the research community and give it the credit it deserves. \\n\\nThanks,\"}", "{\"title\": \"Response to Reviewer onBH Comments - Part 1\", \"comment\": \"We thank the reviewer for the valuable feedback. We address the concerns raised. All the changes in the main manuscript are highlighted with blue color.\\n\\n**A comparison of pruning on FFN1 vs FFN2 layers.** Table 1 (Table 19 of the updated paper) highlights the zero-shot performance of the LLaMA-2-7B model compressed with PruneNet with policy learned on different FFN matrices. In most cases, we observe marginal differences in the result ($< 1$%) when the policy is learned with FFN2 instead of FFN1. The observations emphasize that PruneNet is invariant to the choice of parameter used for learning the policy. \\n\\n### Table 1. A comparison of FFN1 vs FFN2 layers within the PruneNet framework for Llama-2-7B.\\n\\n| Compression Ratio | Layer | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\n| 20% | FFN1 | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\n| 20% | FFN2 | 74.81 | 66.93 | 67.38 | 61.24 | 36.86 | 61.44 |\n| 25% | FFN1 | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\n| 25% | FFN2 | 70.13 | 57.30 | 59.98 | 55.51 | 34.22 | 55.43 |\n| 30% | FFN1 | 71.11 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\n| 30% | FFN2 | 72.20 | 61.56 | 60.01 | 54.12 | 33.70 | 56.32 |\n\n**Usage of reward functions other than the KS distance.** It is important to notice that compression reduces the cardinality of the spectrum. Therefore, the traditional pointwise distance measures (like Euclidean/Cosine/Frobenius) are not applicable for comparing the distance between the spectrum structure of uncompressed and compressed models. Hence, we resort to probabilistic distance measures that can capture the shift in distribution structures of the compressed model. \nTo further understand the effectiveness of PruneNet under different distance measures, we evaluate the LLaMA-2-7B model compressed using PruneNet with the non-parametric Anderson-Darling measure of agreement. Table 2 (Table 18 of the updated paper) highlights the effectiveness of PruneNet with both Kolmogorov-Smirnov (highlighted as KS) and Anderson-Darling (highlighted as AD) distance measures. Under both reward functions, we observe a similar performance of PruneNet for different compression ratios with the LLaMA-2-7B model. The results further emphasize the stability of our proposed compression method under different choices of distance metric. \n\n### Table 2. Zero-shot performance of Llama-2-7B compressed with PruneNet with different reward functions.\n\n| Compression Ratio | Reward Function | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | KS | 75.30 | 65.50 | 66.43 | 63.80 | 37.20 | 61.66 |\\n| 20% | AD | 73.01 | 63.30 | 65.70 | 60.40 | 37.46 | 59.97 |\\n| 25% | KS | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | AD | 73.88 | 61.17 | 63.98 | 61.62 | 35.84 | 59.30 |\\n| 30% | KS | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | AD | 72.13 | 61.88 | 60.18 | 58.00 | 33.62 | 57.16 |\\n\\n**Motivation behind the usage of discounted rewards and singular values of deeper layers.** Weight matrices of LLMs have been known to exhibit a variation in the distribution of singular values across layers [1]. To further verify our observations from Figure 1 of the main text, we conduct a similar study on the FFN1 matrices of the OPT and Phi-2 models. We plot the cumulative distribution of the singular values of these matrices and observe consistent behaviour across all three model architectures. Figures 5a and 5b in the updated paper highlight the spectrum of the FFN1 module at different layers for Phi-2 and OPT-2.7B models, respectively. This further strengthens our assumption of using discounted rewards in our policy learning approach.\"}", "{\"title\": \"Response to Reviewer ZGwT Comments - Part 1\", \"comment\": \"We thank the reviewer for the valuable feedback. We address the concerns raised. All the changes in the main manuscript are highlighted with blue color.\\n\\n**Structural interdependencies with subsequent layers.** We argue that the interdependencies between subsequent layers are already captured during the pre-training phase of the LLM. Therefore, the spectral structure of the feedforward layers already preserves the information propagation between the subsequent layers. 
By preserving the spectral structure after compression, PruneNet preserves the information that needs to be maintained at later layers, therefore omitting additional effort to preserve the information flow between interconnected layers. \\n\\n**A comparison with simple random sampling.** We highlight the results with random policy in Table 1 (Table 16 of the updated paper), where we select the pruned indices randomly for each model parameter. We refer to this as a 'random' selection. We observe an average $2$% drop with a random selection method, demonstrating the need for a learnable policy for effective model compression.\\n\\n### Table 1. Effect of learnable policy on compressed Llama-2-7B model. \\n\\n| Compression Ratio | Selection | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | Policy-based | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\\n| 20% | Random | 72.36 | 63.14 | 61.18 | 60.31 | 36.52 | 58.70 |\\n| 25% | Policy-based | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | Random | 70.13 | 60.38 | 58.38 | 56.27 | 35.67 | 56.20 |\\n| 30% | Policy-based | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | Random | 73.13 | 59.35 | 55.15 | 49.83 | 31.66 | 53.90 |\\n\\nMoreover, Table 2 (Table 17 of the updated paper) highlights the results with LLaMA-2-7B with deterministic and stochastic (policy learned with PruneNet) policies. In the deterministic policy, we chose only the topk (k depends on the compression ratio) parameters based on the importance metric defined in Equation 3. We observe that deterministic policy often underperforms the stochastic policy with an average margin of $4$%. The results highlight that learnable parameters alone are insufficient to determine which ones to compress. 
Preserving the spectral structure between the compressed and uncompressed models is also critical to ensure minimal performance drop post-compression.\n\n### Table 2. Effect of stochastic policy on compressed Llama-2-7B model.\n\n| Compression Ratio | Policy | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\n| 20% | Stochastic | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\n| 20% | Deterministic | 72.91 | 61.64 | 61.05 | 56.69 | 36.52 | 57.76 |\n| 25% | Stochastic | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\n| 25% | Deterministic | 69.97 | 59.27 | 59.39 | 56.69 | 33.53 | 55.77 |\n| 30% | Stochastic | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\n| 30% | Deterministic | 69.64 | 58.25 | 54.45 | 54.97 | 31.23 | 53.71 |\"}", "{\"title\": \"Subsequent reminder to read our responses\", \"comment\": \"Dear bk7b,\n\nThis is another reminder to check our responses to your recent comments. We have meticulously addressed all your new concerns. We sincerely request you to check our responses.\n\nThanks\"}", "{\"title\": \"A gentle reminder to check our responses\", \"comment\": \"Dear reviewer ZGwT,\n\nWe have addressed all the concerns you raised with additional empirical evidence. We request you to kindly review our responses and let us know if you have further questions.\n\nWe will make our best effort to clarify your doubts and concerns.\n\nThanks,\"}", "{\"title\": \"Possibly last reminder to check our comments\", \"comment\": \"Dear reviewer bk7b,\n\nIt is unfortunate that even after so many reminders we were not able to receive any further response from your end. We tried our best to provide answers to all the queries you raised. This is our final request to you to check our previous comments. 
We are ready to address other concerns if you have any.\n\nThanks,\"}", "{\"summary\": \"This paper introduces PruneNet, a structured pruning technique that deploys a policy learning process to identify important parameters. The proposed method is calibration-free and, once completed, can be applied to different pruning ratios without repeated assessments of weights and data. The proposed method can be applied to the popular Llama-2-7B in just 15 minutes, which is efficient enough for LLM pruning. The authors conducted experiments on Llama-2-7B and Phi-2, with compression ratios from 20% to 30%. Compared to SliceGPT, the method achieves better accuracy on 5 zero-shot tasks, keeping >80% of the performance of the original LLMs. Besides, even without fine-tuning, the method is still able to preserve a good average accuracy.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The idea of calibration-free pruning is interesting. The policy learned on one compression ratio is scalable and can be directly transferred to higher or lower ratios. This is a useful feature if one would like to evaluate the performance of pruned models for a good trade-off between performance and efficiency. In addition, extensive experiments were conducted to evaluate the effectiveness of the policy learning, with different fine-tuning settings and compression ratios. The proposed method is able to achieve better average accuracy (+3%~+4%) on zero-shot tasks.\", \"weaknesses\": \"1. It appears that most of the experiments in this paper were conducted without fine-tuning. It would be insightful to examine how the proposed method compares to other baselines when fine-tuning is enabled, as this could provide a more complete picture of its performance under optimal conditions.\n2. The efficiency of policy learning has not been studied. Providing more analysis on the efficiency of policy learning would enhance the understanding of its practical impact.\n3. 
Based on point 2, if we allocate the same computational budget across all baselines, such as assigning the training cost of policy learning to LoRA fine-tuning in other baselines, does the proposed method still achieve superior performance?\n4. Including pseudocode for the entire pipeline would improve clarity and reproducibility.\n5. One of my main concerns is the lack of clarity behind the motivation for each design choice. For example, Equation 3 is introduced without sufficient context or explanation. The KS distance introduced in Equation 6 is also given little motivation. Why the KS distance instead of other distribution distances? In addition, other paragraphs could also benefit from refinement. In particular, the background section, especially Sections 3.1 and 3.2, is densely packed with technical details and formulas about SliceGPT and eigenvalues. However, the relevance to the proposed method remains unclear until the final paragraph, creating a disconnect that might hinder reader understanding.\", \"questions\": \"May I ask about the motivation behind Equation 3 (the importance score)? It appears that the equation is designed to project the parameter $W$ into a $d$-dimensional score vector. Could you clarify why this approach was chosen over other designs, such as directly optimizing a $d$-dimensional vector as the importance score? Additionally, will the learned matrix be shared across all linear weight layers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Acknowledging Reviewer ooUi's Comment\", \"comment\": \"Dear reviewer ooUi,\n\nWe thank you for your effort in reassessing our work and increasing the overall rating. We are committed to addressing all your concerns in the paper. 
\\n\\nThanks,\"}", "{\"title\": \"Could you please check our response?\", \"comment\": \"Dear reviewer bk7b,\\n\\nWe have addressed your recent concerns with new sets of experiments. Could you please check our responses? If you feel our responses address your questions, please consider reassessing our paper. We are ready to address other concerns if any?\\n\\nThanks\"}", "{\"comment\": \"Thanks for the detailed response. Will update the score to 6. Good luck!\"}", "{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer bk7b,\\n\\nThank you very much for your response and for raising the score. We are committed to incorporate all your suggestions into the manuscript. \\n\\nThanks\"}", "{\"title\": \"Response to Reviewer onBH Comments - Part 2\", \"comment\": \"**Detailed results of the one-sided KS test.** For LLaMA-2-7B and Phi-2 models, we calculate the recovered performance ($\\\\frac{\\\\text{Compressed model performance}}{\\\\text{Dense model performance}}$) for different compression ratios with PruneNet and SliceGPT. We conducted a one-sided KS test with the null hypothesis as SliceGPT recovered performance higher than PruneNet recovered performance for different compression ratios. For a compression ratio of $20$%, we obtained the test statistics value of 0.5 and $p$-value of $0.043$, suggesting that we reject the null hypothesis. Similarly, for compression ratio $25$% and $30$% we obtain statistics $0.5$ ($p$-value $0.043$) and $0.6$ ($p$-value $0.026$), respectively. With these statistical results, we conclude that the performance recovered with PruneNet is higher than SliceGPT with statistical significance.\\n\\n**Why does RFT *reduce* performance of models like Phi2?** Although interpreting the performance of a model compression method requires much detailed analysis, which is much out of scope for our current study, we can make certain educated guesses to understand the behaviours of different LLMs under compression. 
One possible explanation for the inferior performance of Phi-2 under RFT is its pre-training objective. Phi-2 is pre-trained on a vast amount of synthetic text [2], which allows it to perform reasonably well on common sense and logical reasoning tasks (the tasks used in our work) even without fine-tuning. Therefore, fine-tuning on additional datasets like WikiText2 or Alpaca could hurt the model's performance. Table 2 of the paper shows that the post-compression Phi-2 model demonstrates higher performance recovery than any other LLM, indicating its robustness on complex reasoning tasks, even after compression. Unlike SliceGPT, PruneNet preserves the original knowledge base of LLMs by preserving spectral structures. Therefore, additional fine-tuning can distort the reasoning abilities of the compressed LLM, adversely affecting its zero-shot performance. \n\n**A comparison of PruneNet with SliceGPT with RFT.** We report the results with LLaMA-2-7B with RFT on the WikiText2 and Alpaca datasets in Table 3 (Table 11 of the updated paper) and Table 4 (Table 12 of the updated paper), respectively. With RFT on the WikiText2 dataset, PruneNet achieves, on average, $>5$% higher accuracy than SliceGPT. However, SliceGPT outperforms PruneNet when fine-tuned on the Alpaca dataset, with an average margin of $3$%. As SliceGPT uses the same datasets for calibration and RFT, it typically has access to more instruction-tuning datasets than PruneNet, allowing it to do better when fine-tuned on the Alpaca dataset. However, it is worth noting that the average standard deviation in the performance of SliceGPT after RFT is significantly higher ($5.5$) than PruneNet's ($1.5$). The low standard deviation highlights the robustness of PruneNet when fine-tuned on different datasets to recover information loss during compression.\n\n### Table 3. Results with RFT on WikiText2 on Llama-2-7B\n\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\\n| 20% | SliceGPT | 69.86 | 64.72 | 61.07 | 54.25 | 36.73 | 57.27 |\\n| 20% | PruneNet | 74.76 | 66.22 | 69.38 | 65.61 | 39.25 | 63.04 |\\n| 25% | SliceGPT | 69.26 | 64.96 | 58.65 | 52.36 | 35.75 | 56.20 |\\n| 25% | PruneNet | 74.37 | 66.46 | 65.71 | 60.82 | 36.60 | 60.79 |\\n| 30% | SliceGPT | 67.41 | 63.22 | 55.65 | 50.76 | 34.13 | 54.23 |\\n| 30% | PruneNet | 73.01 | 63.46 | 63.21 | 60.14 | 35.92 | 59.15 |\\n\\n### Table 4. Results with RFT on Alpaca on Llama-2-7B\\n\\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\\n| 20% | SliceGPT | 76.55 | 65.59 | 68.26 | 71.84 | 45.05 | 65.46 |\\n| 20% | PruneNet | 72.73 | 62.25 | 66.45 | 61.52 | 42.15 | 61.02 |\\n| 25% | SliceGPT | 75.79 | 63.22 | 65.12 | 68.22 | 42.83 | 63.04 |\\n| 25% | PruneNet | 75.79 | 62.35 | 65.48 | 60.94 | 39.16 | 60.74 |\\n| 30% | SliceGPT | 74.59 | 61.64 | 63.06 | 66.54 | 40.87 | 61.34 |\\n| 30% | PruneNet | 72.14 | 62.75 | 62.38 | 55.43 | 37.03 | 57.95 |\"}", "{\"title\": \"Response to Reviewer ZGwT Comments - Part 2\", \"comment\": \"**Efficacy of PruneNet on larger models.** Table 3 (Table 14 in the updated paper) highlights the results with PruneNet and SliceGPT for different compression ratios for the LLaMA-2-13B model. The performance drops drastically for larger models like LLaMA-13B at a high compression ratio. However, the results in Table 4 (Table 15 in the updated paper) highlight that after recovery fine-tuning, the compressed models regain the performance quickly and can preserve up to $84$% of the original uncompressed model performance, even at a high compression rate of $30$%. 
PruneNet also outperforms SliceGPT with a margin of $2$%, showcasing a similar trend as the smaller LLMs used in our study.\\n\\n### Table 3. Llama-2-13B results without RFT. \\n\\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 80.47 | 72.22 | 79.39 | 77.48 | 49.23 | 71.76 |\\n| 20% | SliceGPT | 71.87 | 69.38 | 63.04 | 69.87 | 43.09 | 63.45 |\\n| 20% | PruneNet | 77.15 | 66.38 | 72.90 | 70.50 | 41.81 | 65.75 |\\n| 25% | SliceGPT | 68.55 | 67.48 | 58.10 | 62.50 | 37.88 | 58.90 |\\n| 25% | PruneNet | 70.89 | 62.43 | 58.67 | 58.63 | 34.04 | 56.93 |\\n| 30% | SliceGPT | 66.10 | 65.11 | 52.69 | 56.82 | 35.07 | 55.16 |\\n| 30% | PruneNet | 61.92 | 56.99 | 35.65 | 46.34 | 28.33 | 45.87 |\\n\\n### Table 4. Llama-2-13B results with RFT on WikiText2. \\n\\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 80.47 | 72.22 | 79.39 | 77.48 | 49.23 | 71.76 |\\n| 20% | SliceGPT | 74.10 | 68.51 | 66.94 | 70.54 | 43.77 | 64.77 |\\n| 20% | PruneNet | 76.22 | 68.43 | 70.72 | 66.88 | 42.83 | 65.02 |\\n| 25% | SliceGPT | 71.27 | 68.98 | 64.12 | 63.76 | 40.87 | 61.80 |\\n| 25% | PruneNet | 76.93 | 64.80 | 70.44 | 66.96 | 40.36 | 63.90 |\\n| 30% | SliceGPT | 69.64 | 66.85 | 59.93 | 59.55 | 38.65 | 58.92 |\\n| 30% | PruneNet | 73.45 | 65.59 | 64.50 | 60.73 | 38.57 | 60.57 |\"}", "{\"title\": \"Review for rebuttal\", \"comment\": \"Thanks for the detailed rebuttal. The authors have addressed all my questions and I change my score accordingly. All the best with your submission.\"}", "{\"comment\": \"Dear authors,\\n\\nSorry for the late comeback.\\n\\nI appreciate much discussion regarding the availability of validation dataset. I strongly recommend including the discussion to the manuscript. 
Although I do have some unclear points regarding the spectrum matching, I am now convinced that this paper makes meaningful progress. I have raised my evaluation accordingly.\\n\\nBest regards, \\nReviewer.\"}", "{\"title\": \"Response to Reviewer ooUi Comments - Part 1\", \"comment\": \"We thank the reviewer for the valuable feedback. We address the concerns raised. All the changes in the main manuscript are highlighted in blue.\\n\\n**Comparison of PruneNet with other compression methods with RFT enabled.** We report the results with LLaMA-2-7B with RFT on Wikitext2 and Alpaca datasets in Table 1 (Table 11 of the updated paper) and Table 2 (Table 12 of the updated paper), respectively. With RFT on the Wikitext2 dataset, PruneNet achieves, on average, $>5$% higher accuracy than SliceGPT. However, SliceGPT outperforms PruneNet when fine-tuned on the Alpaca dataset, with an average margin of $3$%. As SliceGPT uses the same datasets for calibration and RFT, it typically has access to more instruction-tuning datasets than PruneNet, allowing it to do better when fine-tuned on the Alpaca dataset. However, it is worth noting that the average standard deviation in the performance of SliceGPT after RFT is significantly higher ($5.5$) than that of PruneNet ($1.5$). The low standard deviation highlights the robustness of PruneNet when fine-tuned on different datasets to recover information loss during compression.\\n\\n### Table 1. Results with RFT on WikiText2 on Llama-2-7B\\n\\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\\n| 20% | SliceGPT | 69.86 | 64.72 | 61.07 | 54.25 | 36.73 | 57.27 |\\n| 20% | PruneNet | 74.76 | 66.22 | 69.38 | 65.61 | 39.25 | 63.04 |\\n| 25% | SliceGPT | 69.26 | 64.96 | 58.65 | 52.36 | 35.75 | 56.20 |\\n| 25% | PruneNet | 74.37 | 66.46 | 65.71 | 60.82 | 36.60 | 60.79 |\\n| 30% | SliceGPT | 67.41 | 63.22 | 55.65 | 50.76 | 34.13 | 54.23 |\\n| 30% | PruneNet | 73.01 | 63.46 | 63.21 | 60.14 | 35.92 | 59.15 |\\n\\n### Table 2. Results with RFT on Alpaca on Llama-2-7B\\n\\n| Compression Ratio | Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 0% | Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\\n| 20% | SliceGPT | 76.55 | 65.59 | 68.26 | 71.84 | 45.05 | 65.46 |\\n| 20% | PruneNet | 72.73 | 62.25 | 66.45 | 61.52 | 42.15 | 61.02 |\\n| 25% | SliceGPT | 75.79 | 63.22 | 65.12 | 68.22 | 42.83 | 63.04 |\\n| 25% | PruneNet | 75.79 | 62.35 | 65.48 | 60.94 | 39.16 | 60.74 |\\n| 30% | SliceGPT | 74.59 | 61.64 | 63.06 | 66.54 | 40.87 | 61.34 |\\n| 30% | PruneNet | 72.14 | 62.75 | 62.38 | 55.43 | 37.03 | 57.95 |\"}", "{\"title\": \"A gentle reminder to check our responses\", \"comment\": \"Dear reviewer ooUi,\\n\\nWe have addressed all the concerns you raised with additional empirical evidence. We request you to kindly review our responses and let us know if you have further questions.\\n\\nWe will make our best effort to clarify your doubts and concerns.\\n\\nThanks,\"}", "{\"title\": \"Response to Reviewer ooUi Comments - Part 3\", \"comment\": \"**A detailed motivation behind design choices.** We perform an ablation study to understand the importance of different components of PruneNet.\\n\\nWe conduct experiments with a random selection process, where the pruned parameter indices are chosen randomly. 
We highlight the results with the random policy in Table 3 above (Table 16 of the updated paper). We observe an average $2$% drop with the random selection method, justifying the need to learn which parameters to compress for more effective model compression.\\n\\nTable 4 above (Table 17 of the updated paper) highlights the results with LLaMA-2-7B with deterministic and stochastic (policy learned with PruneNet in Equation 5 of the paper) policies. In the deterministic policy, we choose only the top-k parameters based on the importance metric defined in Equation 3. We observe that the deterministic policy often underperforms the stochastic policy with an average margin of $4$%. The results highlight that parameter importance alone cannot determine which parameters to compress. Preserving the spectral structure between the compressed and uncompressed models is critical to ensure minimal performance drop post-compression.\\n\\nTo further understand the effectiveness of PruneNet under different distance measures, we evaluate the LLaMA-2-7B model compressed using PruneNet with the non-parametric Anderson\\u2013Darling measure of agreement. Table 5 (Table 18 of the updated paper) highlights the effectiveness of PruneNet with both Kolmogorov-Smirnov (highlighted as KS) and Anderson\\u2013Darling (highlighted as AD) distance measures. Under both reward functions, we observe a similar performance of PruneNet for different compression ratios with the LLaMA-2-7B model. The results further emphasize the stability of our proposed compression method under different metric choices.\\n\\n### Table 5. Zero-shot performance of Llama-2-7B compressed with PruneNet with different reward functions.\\n\\n| Compression Ratio | Reward Function | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. 
|\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | KS | 75.30 | 65.50 | 66.43 | 63.80 | 37.20 | 61.66 |\\n| 20% | AD | 73.01 | 63.30 | 65.70 | 60.40 | 37.46 | 59.97 |\\n| 25% | KS | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | AD | 73.88 | 61.17 | 63.98 | 61.62 | 35.84 | 59.30 |\\n| 30% | KS | 71.13 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | AD | 72.13 | 61.88 | 60.18 | 58.00 | 33.62 | 57.16 |\\n\\nTable 6 (Table 19 of the updated paper) highlights the zero-shot performance of the LLaMA-2-7B model compressed with PruneNet with policy learned on different FFN matrices. In most cases, we observe marginal differences in the result ($< 1$%) when the policy is learned with FFN2 instead of FFN1. The observations emphasize that PruneNet is invariant to the choice of parameter used for learning the policy. \\n\\n### Table 6. A comparison of FFN1 vs FFN2 layers within the PruneNet framework for Llama-2-7B.\\n\\n| Compression Ratio | Layer | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | FFN1 | 75.30 | 65.50 | 66.43 | 63.80 | 37.29 | 61.66 |\\n| 20% | FFN2 | 74.81 | 66.93 | 67.38 | 61.24 | 36.86 | 61.44 |\\n| 25% | FFN1 | 72.09 | 62.43 | 62.33 | 60.14 | 36.18 | 58.63 |\\n| 25% | FFN2 | 70.13 | 57.30 | 59.98 | 55.51 | 34.22 | 55.43 |\\n| 30% | FFN1 | 71.11 | 61.09 | 58.30 | 53.20 | 33.53 | 55.45 |\\n| 30% | FFN2 | 72.20 | 61.56 | 60.01 | 54.12 | 33.70 | 56.32 |\"}", "{\"title\": \"Response to Reviewer onBH Comments - Part 3\", \"comment\": \"**An implementation of policy-based pruning for attention layers.** We carry out a similar pruning strategy on attention layers, wherein we learn importance scores of the rows for the *output matrix* of the attention layer, and correspondingly slice off the key, query and value matrices to keep the output dimension consistent. 
We highlight the pruning results for the LLaMA-2-7B model with compressed self-attention layers in Table 5 (Table 20 of the updated paper). The performance drop with the compressed models suggests that compressing self-attention layers is intrinsically harder than compressing FFN layers. However, around $50$% of the performance drop can be recovered with recovery fine-tuning.\\n\\n### Table 5. Results with Llama-2-7B with pruned self-attention modules.\\n\\n| Compression Ratio | RFT | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ---------- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| 20% | None | 50.11 | 52.49 | 26.24 | 27.31 | 28.75 | 36.98 |\\n| 20% | WikiText2 | 71.98 | 56.83 | 53.04 | 55.13 | 33.45 | 54.09 |\\n| 25% | None | 50.11 | 50.28 | 26.25 | 25.99 | 28.16 | 36.16 |\\n| 25% | WikiText2 | 69.42 | 54.93 | 40.24 | 49.87 | 32.68 | 49.43 |\\n| 30% | None | 50.59 | 49.57 | 26.58 | 26.56 | 26.54 | 35.97 |\\n| 30% | WikiText2 | 62.46 | 52.01 | 47.53 | 38.55 | 28.75 | 45.86 |\\n\\n**Performance drops for PruneNet with high compression ratios ($>40$%).** Table 6 (Table 13 of the updated paper) highlights the performance of the LLaMA-2-7B model at $50$% compression ratio with PruneNet and SliceGPT. While both methods can regain only $60$% of the performance of the original uncompressed model, PruneNet performs $2$% better than SliceGPT, showcasing its effectiveness over the baseline, even at a very high compression rate.\\n\\n### Table 6. Results with 50% compression ratio on Llama-2-7B \\n\\n| Method | PIQA | WinoGrande | HellaSwag | ARC-e | ARC-c | Avg. |\\n| ----- | ------- | ------- | ------- | ------- | ------- | -------- |\\n| Dense | 79.11 | 69.06 | 75.99 | 74.58 | 46.25 | 69.00 |\\n| SliceGPT | 53.97 | 53.04 | 32.65 | 34.76 | 23.72 | 39.63 |\\n| PruneNet | 59.08 | 52.09 | 35.21 | 34.89 | 25.43 | 41.46 |\\n\\n**Format: Sparsity and Effective Sparsity.** We thank the reviewer for the suggestion. 
We have added the explanation in Appendix A.2 of the updated paper.\\n\\n**References**\\n\\n[1] Yuan, Zhihang, Yuzhang Shang, Yue Song, Qiang Wu, Yan Yan, and Guangyu Sun. \\\"Asvd: Activation-aware singular value decomposition for compressing large language models.\\\" arXiv preprint arXiv:2312.05821 (2023).\\n\\n[2] Li, Yuanzhi, S\\u00e9bastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. \\\"Textbooks are all you need ii: phi-1.5 technical report.\\\" arXiv preprint arXiv:2309.05463 (2023).\"}", "{\"metareview\": \"The paper introduces PruneNet, a novel structured pruning method for LLMs that eliminates the need for calibration datasets. By learning a stochastic policy to preserve the spectral structure of model weights, PruneNet achieves superior efficiency and robustness compared to methods like SliceGPT, retaining high performance even at significant compression ratios. The ability to transfer policies across compression ratios and compress models in minutes highlights its practical value for low-resource and privacy-sensitive settings.\\n\\nThe paper is well-written, with strong empirical validation and thorough comparisons, addressing reviewer concerns effectively. While minor limitations, such as limited tests on larger models, were noted, the authors\\u2019 detailed rebuttals and additional results strengthened the work. Overall, this paper provides a significant contribution to model compression, offering an efficient and scalable solution. I recommend its acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about the limited analysis of design choices, the scope of comparisons with baselines like SliceGPT, and the theoretical justification of certain metrics (e.g., KS distance). The authors addressed these by providing additional ablations, broader comparisons (including RFT and larger models), and clarified theoretical motivations and results. 
They also extended experiments to alternative settings, such as attention layers and higher compression ratios.\"}", "{\"title\": \"Response to Reviewer ooUi Comments - Part 3 (contd.)\", \"comment\": \"**Response to Question.** The key motivation behind the design of PruneNet is to have a predictor network which can compute the importance scores of the rows of *any* weight matrix; this allows us to reuse the same learned predictor network to prune the FFN weight matrix of *any* layer in the model, which turns out to be extremely efficient compared to other SOTA pruning methods.\\n\\nSome clarification about equation (3) of the main text is in order. Note that the output of equation (3) is an $n$-dimensional vector (and not a $d$-dimensional vector), where $n$ is the number of rows in the weight matrix $W$. In essence, the predictor network is computing importance scores for each row of a weight matrix in two steps: in the first part of equation (3), the network computes the *relative importance* of rows amongst themselves. This is needed since the importance of a row is potentially correlated with the importance of other rows of the weight matrix. The second part of equation (3), particularly the matrix multiplication within $\\\\sigma$, simply projects the $n\\\\times n$ matrix of relative importance scores into an output vector of importance scores of each row. In contrast, as suggested by the reviewer, directly optimizing a vector of importance scores conflicts with our design philosophy of **reusability** of the learned predictor network.\", \"regarding_the_use_of_a_distributional_distance\": \"since pruning reduces the cardinality of the spectrum, traditional pointwise distance measures (like Euclidean/Cosine/Frobenius norms) are not applicable for comparing the spectrum structure of uncompressed and compressed models. 
Due to this, we resort to distributional distance measures to capture the shift in the spectrum of the model; a specific structural argument is given in Corollary 3.3. As mentioned above, we also evaluated the effectiveness of PruneNet under two different distance measures, namely the KS distance and the non-parametric Anderson-Darling measure of agreement.\\n\\nWe also explained above that directly optimizing the parameter importance often leads to poorer performance than the learned compression policy. We reuse the same policy learner for all the layers within the LLM. The policy learner is agnostic to the model architecture and learns a mapping between the compressed and uncompressed model parameters. The simplicity of the policy learner allows us to reuse the same model to compress an LLM at any given compression ratio, offering greater flexibility and efficiency.\"}"
] }
5RUM1aIdok
GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation
[ "Tao Feng", "Yihang Sun", "Jiaxuan You" ]
The powerful capabilities of Large Language Models (LLMs) have led to their growing use in evaluating human-generated content, particularly in evaluating research ideas within academic settings. Existing solutions primarily rely on prompt-based LLM methods or fine-tuned lightweight language models for idea evaluation. However, these methods are often unstable and struggle to comprehend the complex semantic information embedded in the ideas, impeding their ability to perform high-quality evaluations. To address the above challenges, we propose $\texttt{GraphEval}$, a lightweight graph-based LLM framework for idea evaluation. Our insight is that a complex idea can be broken down into comprehensible viewpoint nodes using prompts from small LLMs. These viewpoint nodes can then be linked together through edges created from LLM-based relation extraction and/or BERT similarity scores. The created viewpoint-graph can be used to conveniently propagate scores across view-nodes to improve the robustness of the idea evaluations. In particular, we propose two lightweight graph-based methods for idea evaluation: (1) GraphEval-LP: a training-free label propagation algorithm that propagates evaluation scores from known view-nodes to unknown nodes; (2) GraphEval-GNN: a Graph Neural Networks (GNN) that is trained to predict the evaluation scores given the observed graph with minimal computation resources. Moreover, to overcome LLM's limitation in objectively assessing the novelty of ideas, we further propose a novelty detection model to GraphEval-GNN to enhance its capability in judging idea novelty. Experiments on two datasets show $\texttt{GraphEval}$ improves F1 scores by at least 14% with low computation and API costs. Additionally, $\texttt{GraphEval}$ can effectively detect plagiarized ideas.
[ "Idea Evaluation", "View-graph", "Lightweight model", "Label propagation", "Graph prediction" ]
Accept (Poster)
https://openreview.net/pdf?id=5RUM1aIdok
https://openreview.net/forum?id=5RUM1aIdok
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zoEaOBEexY", "sKfov8UrfE", "rEXNMg8gru", "qYbc8Cxiqk", "o150ri2N5l", "k7PSTOt0j3", "jGYVQ7ptZF", "htzCcSPPBI", "f83JnlBM6y", "bTDI8lLi1P", "b83ikV3sTz", "ZnsyfXN1zA", "Ze83pLKGeF", "Z1h9IGqDva", "YrcOrDoNze", "XcrxMtg095", "VB1jNzs1l3", "UhaTGoVd7g", "NsS0umxETy", "NctjZYJfFq", "NXoPzc2lhT", "MxF6NVYlsp", "MdS6nVFqP0", "LTn1PKyYoP", "KllT2i6otv", "K12wLzn78E", "JQJGq7lfzs", "J5giQWNkkR", "Hb3hWMNu7h", "Ca1iMS12dB", "6lb3zpcVMq", "69qolGFarT", "5a5bMx07lU", "4se0IXMTwf", "1xMbCnn7ZL", "0ahGNCKfYk" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732213508819, 1732706974866, 1737524097872, 1733161142570, 1732769407459, 1734401801164, 1730510033890, 1730512768252, 1732213089323, 1732943175968, 1732212392148, 1732955370899, 1732561073154, 1730162708759, 1732421346509, 1732634773999, 1732421287964, 1732422227877, 1732213341017, 1732213841303, 1732211541864, 1732955700319, 1732213610751, 1732685267550, 1732500770710, 1733257778633, 1732421392274, 1732212654820, 1732771769151, 1732515044137, 1732685790231, 1732215147751, 1732767951444, 1732763002715, 1730482095156, 1732265066481 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_d4yk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Area_Chair_Sya7" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_oRRW" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_zpR7" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_zpR7" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_5M8a" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_5M8a" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Area_Chair_Sya7" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_oRRW" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_d4yk" ], [ "ICLR.cc/2025/Conference/Submission11021/Authors" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_d4yk" ], [ "ICLR.cc/2025/Conference/Submission11021/Reviewer_d4yk" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer d4yk (3/3)\", \"comment\": \"**Continuation of the 
response to Question 5**:\\n\\n**[Computational complexity and efficiency]** Actually, compared to many influential papers in the field of large language models (LLMs) [5,6], which do not report data on method cost and efficiency, we have already documented the token cost and price (Normed Cost) for each method in Tables 2 and 3. However, following the reviewers' suggestions, we have included a discussion on computational complexity, efficiency, and resource requirements in the revised PDF ***[lines 425-431]***. Specifically, in terms of computational complexity, we calculated the average GPU memory usage for GraphEval-GNN and Fine-tuned BERT for the two tasks, which is 372 MB and 4.84 GB, respectively. The detailed modifications are as follows: During the training phase, we configured the graph neural network as a two-layer weighted GNN with a hidden dimension of 64. The batch size is set to 64, and the maximum number of training epochs is limited to 1000. We employ the Adam optimizer (Diederik, 2014) for training and gradually reduce the learning rate from 1e-3 to 0 using a LambdaLR scheduler. Our proposed method is implemented using PyTorch and PyTorch Geometric, with all experiments conducted on a single NVIDIA A100 Tensor Core GPU. For the LLMs, we utilize API calls from Together AI to obtain responses. Additionally, the average GPU memory usage of GraphEval-GNN for the two tasks is 372 MB, whereas Fine-tuned BERT utilizes 4.84 GB on average.\\n\\n**[1]** Chen, D., Lin, Y., Li, W., Li, P., Zhou, J., & Sun, X. (2020, April). Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI conference on artificial intelligence (Vol. 34, No. 04, pp. 3438-3445).\\n\\n**[2]** Rusch, T. K., Bronstein, M. M., & Mishra, S. (2023). A survey on oversmoothing in graph neural networks. arXiv preprint arXiv:2303.10993.\\n\\n**[3]** Wu, F., Souza, A., Zhang, T., Fifty, C., Yu, T., & Weinberger, K. (2019, May). 
Simplifying graph convolutional networks. In International conference on machine learning (pp. 6861-6871). PMLR.\\n\\n**[4]** He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., & Wang, M. (2020, July). Lightgcn: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval (pp. 639-648).\\n\\n**[5]** Park, J. S., O'Brien, J., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023, October). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th annual acm symposium on user interface software and technology (pp. 1-22).\\n\\n**[6]** Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., & Narasimhan, K. (2024). Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.\\n\\n**Q6. In the left of Figure 1, are the colors of the positive and negative prompts correctly marked? Specifically, is \\\"If a paper is good or you are unsure, give it good scores and accept it.\\\" intended as a negative prompt?** \\n\\n**Response:** We are sorry for the confusion. Our intention was to demonstrate that different prompts significantly influence LLM-based methods, thereby proving that LLM-based approaches possess strong biases in idea evaluation. Unfortunately, we inadvertently reversed the colors for positive and negative prompts. We have corrected this in the current version. 
Thank you again for your feedback.\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"I still believe that a theoretical analysis or justification of the proposed mechanisms, such as the viewpoint-graphs extraction and the plagiarism detection mechanism, would further strengthen the paper's contribution and make it more solid.\\n\\nTherefore, I keep my recommendation rating of 6 for this paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"A friendly reminder for further discussion\", \"comment\": \"Dear Reviewer oRRW,\\n\\nWe hope this message finds you well. The rebuttal phase ends today and we would like to know if our further response has completely addressed your concerns. **In our further response, we have validated the good scalability and generalization capabilities of GraphEval-GNN on the large-scale ASAP-Review dataset.** We believe that we have addressed all of your previous concerns. We would really appreciate that if you could check our response and revised manuscript. If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score. Looking forward to hearing back from you.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Express sincere thanks for the reviewer's constructive feedback\", \"comment\": \"Thank you for the reviewer's insightful feedback and hard work. Your valuable suggestions have significantly improved our paper. We are pleased to hear that our responses have addressed most of your concerns. During the rebuttal phase, we followed the reviewers' suggestions and conducted numerous new settings and experimental trials. Consequently, we are continuously organizing the code and adding necessary comments to ensure its readability and usability. We commit to including the code link in the potential camera-ready version of the paper to guarantee its reproducibility. 
We are committed to incorporating the suggested changes in our revisions to further enhance the manuscript. By the way, as Thanksgiving is approaching, we also sincerely wish you a Happy Thanksgiving. Thank you for the valuable feedback and hard work you provided during the rebuttal phase.\"}", "{\"metareview\": \"The paper introduces a novel framework that leverages graph structures to enhance the evaluation of research ideas using large language models (LLMs). The framework, which includes GraphEval-LP and GraphEval-GNN, demonstrates improvements in robustness and accuracy, outperforming baseline methods by a large margin.\\n\\nReviewers are very positive about the paper in general. However, they have also pointed out some limitations, such as the small dataset size used for evaluation, which may affect the generalizability of the results, and concerns about the scalability of GraphEval-GNN for larger datasets. The paper also lacks a thorough theoretical analysis of the proposed mechanisms. Despite these issues, the innovative approach and significant performance improvements make this paper a valuable contribution to the field. The rebuttal includes some new results which are suggested to be included to the paper.\\n\\nTherefore, I recommend accepting.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed most concerns of the reviewers. One reviewer was not actively involved in the rebuttal process. The reviewer's only concern remains the small size of the dataset used in the evaluation. The authors have provided reasonable response to the concerns.\"}", "{\"summary\": \"This paper presents a framework addressing the limitations of LLMs in evaluating research ideas, focusing on stability, semantic comprehension, and objectivity. The authors introduce two core methods\\u2014GraphEval-LP (label propagation) and GraphEval-GNN (a GNN-based approach). 
Both are lightweight, with GraphEval-GNN incorporating a novelty detection component to assess plagiarism risks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The viewpoint-graph breaks down complex ideas into interconnected, evaluable components.\", \"weaknesses\": \"1. Some components of model lack clear explanations.\\n2. The experimental evaluation is limited; for instance, there is no ablation study, and the dataset size is small.\\n3. The construction of the graph relies solely on textual information. Evaluating a paper\\u2019s acceptance potential based exclusively on textual relevance is not entirely reasonable.\", \"questions\": \"1. In the domain of automated paper review generation, several existing works relevant to this field are missing from the discussion [1-3].\\n\\n2. The paper contains informal language in some variable names, such as \\\"max_iters\\\" and \\\"cos sim\\\".\\n\\n3. The framework represents edges between viewpoints within a single document and across multiple documents as undirected. Could the authors clarify how (or if) temporal edges are integrated into the graph?\\n\\n4. In line 288, what is $n$ when predicting the label $\\\\hat{y}$?\\n\\n5. Lines 270-274 describe the initialization process is ambiguous. The initialization node features are formed with node labels? Could the authors clarify this initialization procedure?\\n\\n6. The task tackled in this study does not appear complex but the dataset used for evaluation is quite small. Given the availability of larger, relevant datasets (e.g., provided by ReviewAdvisor [1], which covers ICLR and NeurIPS), could the authors explain why a larger dataset was not employed?\\n\\n7. What is the ratio of positive to negative samples in the dataset? The authors note that LLMs are inclined to accept most papers, yet the baseline methods report very low accuracy (below 20% on the ICLR dataset), suggesting a high prevalence of negative samples. 
Could the authors provide details on dataset composition and how they ensured a fair experimental comparison?\\n\\n8. The paper lacks implementation details, and no code or datasets are available.\\n\\n9. Could the authors specify the exact prompts used in the CoT and ToT baselines?\\n\\n10. GraphEval framework relies on a 7B parameter LLM while Fine-tuning BERT (with much fewer parameters) achieves comparable performance. How about the performance of fine-tuning 7B models?\\n---\\n\\n[1] Yuan, Weizhe, Pengfei Liu, and Graham Neubig. \\\"Can we automate scientific reviewing?.\\\" Journal of Artificial Intelligence Research 75 (2022): 171-212.\\n\\n[2] Du, Jiangshu, et al. \\\"Llms assist nlp researchers: Critique paper (meta-) reviewing.\\\" arXiv preprint arXiv:2406.16253 (2024).\\n\\n[3] Lin, Jialiang, et al. \\\"Automated scholarly paper review: concepts, technologies, and challenges.\\\" Information fusion 98 (2023): 101830.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to develop a graph-based LLM framework for evaluating research ideas. Specifically, the authors first break down a whole idea into multiple viewpoints (nodes) and then connect them (making edges) based on their embedding-level similarity as well as the relation extraction approach, from which the authors construct the viewpoint graph. Then, by utilizing this viewpoint graph through simple label propagation or GNNs, the proposed approach predicts the quality of the research ideas. In addition to this, the authors propose the simple trick to evaluate the novelty of the ideas, by generating the labels for them and then training the GNN with them. 
The authors show that the proposed approach not only predicts the quality of research ideas better than existing methods but also predicts their novelty.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The proposed approach of evaluating research ideas over a graph structure (by breaking them down into multiple sub-ideas connected with other sub-ideas) is novel and sound.\", \"The proposed approach is superior to other research idea evaluation methods.\", \"This paper is very well-written and easy to follow.\"], \"weaknesses\": \"I don't see any major weaknesses. But, there are some points that can make this paper more solid:\\n* The authors mainly evaluate the proposed method on only the research idea evaluation task, and, while this task is very important and less explored, this point may limit its generalizability to other tasks and domains (i.e., I believe it could be readily applied to other tasks related to evaluating long text, and it is worth trying).\\n* The authors could discuss some works on evaluating long-form text [1, 2], as some of them tend to divide the long text into multiple subsets (similar to the proposed approach) and evaluate each of them. \\n* The authors could incorporate more analyses to further showcase the efficacy of the proposed approach. One experiment worth trying is to show the generalizability of the proposed approach (or the constructed viewpoint graph) to new papers from 2023 by training it with papers from before 2022.\\n\\n---\\n\\n[1] FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation, EMNLP 2023.\\n\\n[2] Let's Verify Step by Step, arXiv 2023.\", \"questions\": \"Please see the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer d4yk (1/3)\", \"comment\": \"**Q1. 
The paper lacks a thorough theoretical analysis or justification for the proposed mechanisms, such as the viewpoint-graph extraction and the plagiarism detection mechanism. Are the viewpoints extracted by LLMs reasonable? If so, how can the accuracy of these viewpoints be verified? Is there an overlap between viewpoints from different complex ideas? How are the inherent LLM issues\\u2014like hallucinations and token limitations\\u2014addressed or mitigated in this framework?**\\n\\n**Response:** Thanks for the reviewer\\u2019s insightful questions. We answer the questions step by step.\\n\\n**[Theoretical analysis or justification]** Our paper primarily focuses on an empirical analysis of how graph modeling can enhance the performance of LLMs in idea evaluation. Because rigorous theoretical proofs and analyses are difficult to construct in most areas of LLM research, we rely on extensive experimental analysis to substantiate our conclusions. \\n\\n**[Verification of the accuracy of these viewpoints]** Verifying the accuracy of these viewpoints is an open question in academia, often requiring well-grounded labeled data or time-consuming and labor-intensive human evaluation. However, this is not the main focus of our paper; our core interest lies in the evaluation of idea quality.\\n\\n**[Overlap between viewpoints]** It is possible for viewpoints from different complex ideas to overlap. In our paper, this aspect is modeled using similarity edges between viewpoint nodes, where edges between overlapping viewpoint-nodes have similarity weights closer to 1.\\n\\n**[Solution for inherent LLM issues]** In the introduction, specifically in Figures 1 and 2, we compare the performance of LLMs and GraphEval. While current LLMs may introduce hallucinations and biases when processing complex and subjective ideas, GraphEval deconstructs these ideas into simpler, more comprehensible viewpoints. 
Using a graph-based framework, it provides more objective reasoning, effectively mitigating hallucination issues in idea evaluation. As for token limitations, we believe this is mainly the focus of papers on long-context LLMs, not the central concern of our paper.\\n\\n**Q2. The paper seems to underutilize the reasoning and generative capabilities of LLMs, which could improve GraphEval\\u2019s interpretability and effectiveness in idea evaluation tasks.** \\n\\n**Response:** Thanks for the reviewer\\u2019s constructive feedback. Indeed, we have explored the reasoning and generative capabilities of LLMs through CoT/ToT prompting in our baselines. However, the results indicate that these capabilities currently have little impact on LLMs for idea evaluation. We believe that in the future, larger LLMs will possess more powerful reasoning and generative capabilities to effectively address the idea evaluation problem. However, these larger LLMs, compared to smaller ones, will incur higher inference costs in idea evaluation. Therefore, our framework essentially aims to utilize lightweight graphs to assist smaller LLMs in viewpoint-level reasoning, thereby enabling smaller models to achieve good results in idea evaluation.\\n\\n\\n**Q3. Several typos are present in the paper. For example, in line 302, \\\"... As illustrated in Sec 3, ...\\\" lacks a period after \\\"Sec\\\". In lines 414-422, the use of macro F1 score to evaluate the accuracy is inconsistent with the content \\\"..., and micro F1 score ....\\\" (in lines 399-401).** \\n\\n**Response:** We apologize for the confusion. We have revised them in the current version: \\\"micro\\\" is corrected to \\\"macro\\\" and \\\"Sec\\\" is corrected to \\\"Sec.\\\".\\n\\n**Q4. Could the authors discuss the similarities and differences between GraphEval and GraphRAG [1]? 
GraphRAG breaks a document into chunks, extracts a knowledge graph from raw text, builds a community hierarchy, generates summaries for these communities, and leverages these structures in RAG tasks. This seems similar to the viewpoint-graph extraction proposed here.** \\n\\n**Response:** Thank you for your constructive questions. Indeed, the two methods seem similar, both aiming to better manage text in the form of graphs. However, they clearly differ in many aspects:\\n\\n**[Element]:** The constituent elements of GraphEval are viewpoints, whereas the constituent elements of GraphRAG are raw text chunks.\\n\\n**[Construction of the graph]:** GraphEval uses text similarity and graph algorithms, while GraphRAG employs LLMs to identify relationships, which is costly and slow.\\n\\n**[Applications]:** GraphEval is used to evaluate ideas and some hard-to-understand texts, while GraphRAG is designed for long-context QA.\\n\\n**[Technique]:** GraphEval utilizes a graph algorithm for output, whereas GraphRAG primarily uses a RAG approach.\"}", "{\"title\": \"Looking forward to your reply\", \"comment\": \"Dear Reviewer oRRW,\\n\\nWe would like to express our sincere appreciation for your positive opinions and constructive review of our paper on the occasion of Thanksgiving. We apologize for intruding during your busy schedule, but as the discussion period is near its end, we would like to ensure our response aligns with your expectations and addresses your concerns. In our further response, we have **validated the good scalability and generalization capabilities of GraphEval-GNN on the large-scale ASAP-Review dataset**. We would like to know if your concerns have been adequately addressed. If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score. \\n\\nWishing you a joyful Thanksgiving,\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer oRRW (1/3)\", \"comment\": \"**Q1. 
In the domain of automated paper review generation, several existing works relevant to this field are missing from the discussion [1-3].**\\n\\n**Response:** Thank you for your feedback. In the introduction and related work sections of our paper, we have cited and discussed a vast array of representative works on idea/text evaluation. As a critically important field, idea evaluation is bound to attract widespread attention in academia, and inevitably there will be some works that we cannot cover. **Our paper currently focuses primarily on several LLM-based works, which represent the latest research trends in academia.** \\n\\nHowever, following the reviewer's suggestion, **we have added a discussion of these three automated paper review generation works in the related work section**: \\\"In addition, numerous studies employed fine-tuned lightweight language models (e.g., BERT (Devlin, 2018)) to evaluate complex texts, such as dialogues (Thida, 2021), tweets (Pota et al., 2021), and the novelty of ideas (Just et al., 2024; Yuan et al., 2022). Conversely, recent studies have sought to leverage the domain knowledge and logical capabilities of LLMs to create idea evaluators (Ubonsiri, 2024; Baek et al., 2024; Du et al., 2024; Lin et al., 2023a). Du et al. (2024) proposed using a prompt-based approach to allow LLMs to act as reviewers and meta-reviewers in order to assess the level of papers/ideas based on different evaluation criteria.\\\".\\n\\n**Q2. The paper contains informal language in some variable names, such as \\\"max_iters\\\" and \\\"cos sim\\\".**\", \"response\": \"Thanks for your comments. In fact, these concepts are quite basic and are often used in graph-related applications. We use this language because it facilitates the understanding of the readers. 
However, following the reviewer's suggestion, we have made them more formal in the revised version: \"Then, we compute the cosine similarity $s$ between their embeddings: $s(e_i, e_j) = \\frac{e_i \\cdot e_j}{\\|e_i\\| \\|e_j\\|}$. We perform multiple iterations of label propagation on graph $G$ until the labels no longer change.\"\\n\\n**Q3. The framework represents edges between viewpoints within a single document and across multiple documents as undirected. Could the authors clarify how (or if) temporal edges are integrated into the graph?** \\n\\n**Response:** Thanks for your insightful question. As discussed in ***[lines 212-232] of Section 3*** of the paper, due to the sparsity of the edges constructed by the prompt-based LLM, our edges are ultimately built based on the similarity between embeddings of the viewpoint-nodes, which are undirected. We are fully aware that idea evaluation is temporal, hence in ***[lines 321-344] of Section 5***, we introduced **how we incorporate temporal information into viewpoint-nodes.** Compared to integrating temporal edges into the graph, adding temporal information to the nodes\\u2019 embeddings is simpler and more practical.\\n\\n**Q4. In line 288, what is $n$ when predicting the label $\\\\hat{y}$?** \\n\\n**Response:** Thanks for your question. In ***[lines 200-203]***, we introduced that $n$ **represents the number of viewpoints extracted from a given research idea.** However, the definition of $n$ is indeed introduced too far from where $\\\\hat{y}$ appears. We are sorry for the confusion caused. \\n\\nIn the revised version, we have added further explanation: The predicted label $\\\\hat{y}$ is then determined by selecting the dimension with the highest value in the summed vector, i.e., $ \\\\hat{y} = \\\\arg\\\\max_j \\\\left( \\\\sum_{i=1}^{k} d_i \\\\right)_j $, where $j$ indexes the dimensions of the vector and $k$ denotes the number of viewpoints extracted from a given research idea.\\n\\n**Q5. 
The description of the initialization process in lines 270-274 is ambiguous. Are the initial node features formed from node labels? Could the authors clarify this initialization procedure?** \\n\\n**Response:** Thanks for the reviewer\\u2019s question. We believe there may be a misunderstanding about the use of label propagation in GraphEval-LP, which is a commonly used method that is simple to deploy [[1](<https://en.wikipedia.org/wiki/Label_propagation_algorithm#:~:text=Label%20propagation%20is%20a%20semi,have%20labels%20(or%20classifications)>)].\\n\\nIn fact, in the initialization procedure section, we do not mention node features, and the label propagation process does not involve node features either. The process of label propagation involves transferring labels from labeled viewpoint-nodes (label initialization of the training set) through the graph to unlabeled viewpoint-nodes (testing set) ***[lines 270-278]***, and aggregating the viewpoint-nodes contained in an idea ***[lines 279-283]*** to obtain the final prediction result for the idea.\"}", "{\"title\": \"Looking Forward to Further Discussion\", \"comment\": \"Dear reviewer 5M8a,\\n\\nAs the discussion period ends soon, we would like to check whether our responses answer your questions. Following your insightful comments, we have clarified the rationale and academic value of focusing on tasks in datasets such as ICLR Papers, as well as the scalability of GraphEval-GNN. We have also conducted experiments on the long-form text evaluation task to test the generalizability of GraphEval to other domains or different types of research content and proposed a hybrid relation extraction method, which we have discussed and compared with GraphEval. 
Thank you again for your comments and suggestions to improve our paper, and we look forward to your reply.\\n\\nBest,\\nAuthors\"}", "{\"summary\": \"The paper presents a novel framework that leverages graph structures to enhance large language models in the evaluation of complex research ideas. The authors propose two core methods: GraphEval-LP, a label propagation-based approach, and GraphEval-GNN, a GNN model for predicting evaluation scores. The framework addresses known limitations in LLM-based evaluation, such as sensitivity to prompts and biases toward positive feedback, by breaking down ideas into viewpoint nodes and constructing graphs to propagate scores across these nodes. Experiments on two datasets demonstrate that GraphEval improves F1 scores compared to baselines, while also detecting plagiarized ideas.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of graph-based methods to enhance LLMs in research idea evaluation is innovative.\\n2. The methodology is explained in detail, with clear steps for viewpoint extraction, relation identification, and the integration of label propagation and GNN techniques.\\n3. The results demonstrate significant improvements in F1 scores over multiple baselines.\", \"weaknesses\": \"1. While the approach works well on specific datasets (ICLR Papers and AI Researcher), the generalizability to other domains or different types of research content is unclear. Given that the topic of the paper is research idea validation, the experiments instead detect whether a paper will be accepted. The claim and experiment are a bit stretched.\\n2. The paper mentions that the LLM-based relation extraction yields sparse edges between viewpoint nodes (with a 10.73% edge density on average). This sparsity may limit the effectiveness of GraphEval in certain scenarios, particularly where relationships between ideas are less explicit. 
The authors address this by using BERT embeddings, but further experiments with alternative relation extraction methods could improve robustness.\\n3. While GraphEval-GNN offers significant improvements in performance, its scalability is questionable for larger datasets. Training GNNs on extensive viewpoint-subgraphs may become computationally prohibitive, especially if the graph structure grows significantly.\", \"questions\": \"Is it possible to try alternative methods for relation extraction, such as unsupervised learning or hybrid approaches, which might address the sparsity issue in graph construction, especially when dealing with less structured idea content?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Could you let us know if our rebuttal has sufficiently addressed your concerns?\", \"comment\": \"Dear Reviewer 5M8a,\\n\\nWe recognize that the timing of this discussion period may not align perfectly with your schedule, yet we would greatly value the opportunity to continue our dialogue before the deadline approaches.\\n\\nWe hope that our responses and additional experiments have effectively addressed your concerns. We truly appreciate all the valuable advice we have received, and we are pleased to share that one of the reviewers has kindly recognized our improvements by raising their score. This acknowledgment reflects the positive impact of our collaborative efforts in enhancing the quality of the paper.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\"}", "{\"comment\": \"Thanks for your response. After reading the rebuttal, I believe the authors still do not fully address my concerns. The \\\"research idea validation\\\" task is more like a reasoning task for me. 
It evaluates whether the main idea in a paper is valid or not. This is not equivalent to \"paper acceptance.\" Whether a paper is accepted can be based on multiple factors, like the overall presentation, experiments, etc.\\n\\nI'll increase the score by 1.\"}", "{\"title\": \"Looking Forward to Further Discussion\", \"comment\": \"Dear reviewer d4yk,\\n\\nThank you for your prompt response. Could you kindly specify the remaining concerns? We will try our best to address them in the next few days, and we kindly request that you consider further increasing the score.\\n\\nThank you.\"}", "{\"title\": \"Could you let us know if our rebuttal has sufficiently addressed your concerns?\", \"comment\": \"Dear Reviewer zpR7,\\n\\nWe recognize that the timing of this discussion period may not align perfectly with your schedule, yet we would greatly value the opportunity to continue our dialogue before the deadline approaches.\\n\\nWe hope that our responses and additional experiments have effectively addressed your concerns. We truly appreciate all the valuable advice we have received, and we are pleased to share that one of the reviewers has kindly recognized our improvements by raising their score. This acknowledgment reflects the positive impact of our collaborative efforts in enhancing the quality of the paper.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\"}", "{\"title\": \"Response to Reviewer d4yk (2/3)\", \"comment\": \"**Q5. The paper does not provide comprehensive ablation and hyperparameter studies, such as the impact of different LLMs/GNNs and varying values of and (cf. line 247 and line 255). 
Moreover, it is beneficial to provide an analysis of the computational complexity, efficiency, and resource requirements of the proposed framework, including training/inference time, which would be helpful in assessing its scalability and practical applicability.**\\n\\n**Response:** Thank you for the constructive feedback from the reviewers. Regarding the ablation and hyperparameter studies of the paper, we have actually discussed the selection of LLMs and the impact of different relation extraction methods in the paper. Specifically, **in Section 7.1, our experiments explored the influence of LLMs of different sizes on the performance of idea evaluation.** We discovered that in many instances, using smaller LLMs not only cuts costs but also delivers performance comparable or even superior to larger models. Based on these discussions, we adopted the setting of smaller LLMs in the viewpoint extraction part of GraphEval. Additionally, **we discussed the impact and selection of relation extraction methods from the perspective of edge density in Section 3**. Indeed, because our method performs well and is robust, we did not extensively discuss specific parameter tuning and model design during the experiments. However, following the reviewers' suggestions, we conducted ablation studies on edge densities and graph neural network architectures.\\n\\n**[Impact of varying edge densities]** To explore the impact of different edge densities (as defined in Section 3: the number of edges divided by the number of all possible edge connections) on the performance of GraphEval-GNN, we selected five groups of varying edge densities for experimentation and obtained their final results, as shown in the table below. From the table, it can be observed that the performance of GraphEval-GNN initially increases and then decreases as the edge density rises. 
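For concreteness, the edge density referred to in this ablation can be computed as the fraction of possible viewpoint-node pairs that are actually connected. This is a minimal sketch only: the undirected, no-self-loop counting and the function name are assumptions for illustration, not necessarily the paper's exact implementation.

```python
def edge_density(num_nodes: int, num_edges: int) -> float:
    """Fraction of all possible undirected edges (no self-loops) present in the graph."""
    possible_edges = num_nodes * (num_nodes - 1) // 2
    return num_edges / possible_edges if possible_edges else 0.0

# Example: a viewpoint graph with 5 nodes and 5 similarity edges
# has 5 / 10 = 0.5 (50%) edge density.
density = edge_density(5, 5)
```

In practice, different densities could plausibly be obtained by varying the similarity threshold used when adding edges between viewpoint-node embeddings.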
This suggests that an increase in edge density can enhance the representation of GNNs in the idea evaluation problem to a certain extent, but excessively high edge density may lead to performance degradation due to oversmoothing [1, 2]. In practical applications, we can tune the edge density on the validation set to enhance the robustness of our method across various tasks and datasets. All the discussions and experimental results mentioned above have been updated in Section F.3 of the Appendix and Table 24 of the revised PDF version.\\n\\n**Performance under different edge densities**\\n\\nThe performance of GraphEval-GNN initially increases and then decreases as the edge density rises.\\n\\n| Edge Density | Accuracy | Precision | Recall | F1 Score |\\n|--------------|----------|-----------|--------|----------|\\n| 5.6% | 53.2% | 14.54% | 18.77% | 19.76% |\\n| 11.4% | 64.6% | 20.52% | 26.86% | 27.77% |\\n| 22.7% | 72.2% | 35.98% | 36.74% | 43.68% |\\n| **45.5%** | **76.0%**| **38.40%**| **37.30%**| **44.80%** |\\n| 91% | 66.5% | 22.76% | 30.22% | 29.37% |\\n\\n**[Effects of various lightweight graph neural network architectures]** To compare the impact of different lightweight GNN architectures on the performance of GraphEval, we selected two classic lightweight GNN frameworks, SGC [3] and LightGCN [4], to replace the current heterogeneous graph structure in GraphEval. We named these two baselines GraphEval-SGC and GraphEval-LightGCN, respectively. We compared these baselines with GraphEval-GNN on the ICLR Papers dataset, as shown in the table below. We observed that the performance of the lightweight frameworks was inferior to that of GraphEval-GNN, because they sacrifice node-specific information and simplify the graph convolution process to optimize memory usage and speed. 
All the discussions and experimental results mentioned above have been updated in Section F.2 of the Appendix and Table 23 of the revised PDF version. In addition to the aforementioned ablation, we also evaluate the comparative impact of alternative relation extraction methods in Appendix F.3 and Table 24.\\n\\n**Performance comparison of different lightweight graph models.**\\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|--------------------|----------|-----------|--------|----------|\\n| GraphEval-SGC | 61.0% | 27.7% | 23.3% | 27.3% |\\n| GraphEval-LightGCN | 54.0% | 23.43% | 25.05% | 26.70% |\\n| **GraphEval-GNN** | **76.0%**| **38.40%**| **37.30%** | **44.80%** |\"}", "{\"title\": \"Response to Reviewer 5M8a (1/2)\", \"comment\": \"**Q1. While the approach works well on specific datasets (ICLR Papers and AI Researcher), the generalizability to other domains or different types of research content is unclear. Given the topic of the paper is research idea validation, the experiment turned out to detect whether this paper will be accepted. The claim and experiment are a bit stretched.**\\n\\n**Response:** Thanks for the reviewer\\u2019s constructive feedback. We believe that research idea validation is an important and urgent topic to study. Existing tasks or datasets for idea evaluation often rely on employing human evaluators to assign labels to ideas, a method that is not only costly but also makes it difficult to acquire large training datasets. Therefore, we focus on tasks and datasets from open reviews, such as those at ICLR. Given that ICLR reviewers are generally of high caliber, this dataset not only yields high-quality labels but also provides a substantial amount of free data. The research tasks associated with these datasets are also of widespread academic interest. Additionally, we focus on whether papers are accepted for two reasons: 1. This data is relatively easy to obtain and abundantly available; 2. 
Other tasks, such as predicting the average score of papers, can be subjective due to varying scoring standards among reviewers, whereas the acceptance or rejection of papers tends to be more objective. \\n\\nOn the other hand, inspired by your feedback and Reviewer zpR7's Q1, we attempt to validate our method in a long-form text evaluation task to test its generalizability to other domains or different types of research content. Specifically, we used human-annotated data from the FActScore dataset [1], where each entry contains \\\"atomic facts\\\" about celebrities generated by LLMs, along with assessments from human annotators on whether these \\\"atomic facts\\\" were supported by the materials provided to the annotators. Based on the \\\"atomic facts\\\" and human annotations from the training set, our method needed to predict the labels of \\\"atomic facts\\\" in the test set that were partitioned off. We selected topics such as Ramesses IV, Lanny Flaherty, and Florencia Bertotti, and divided the training, validation, and test sets in a 7:1:2 ratio. We compared GraphEval and some applicable baselines on this dataset in the following table. The experimental results in the table verify that our approach performs well on the long-form text evaluation task, demonstrating good adaptability to various tasks. 
All the discussions and experimental results mentioned above have been updated in Section E.1 of the Appendix and Table 20 of the revised PDF version.\\n\\n**Comparative performance results on the Fact Verification dataset.** Bold text denotes the best results.\\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|--------------------|----------|-----------|--------|----------|\\n| Prompted LLM (7B) | 49.79% | 57.19% | 52.27% | 47.59% |\\n| Prompted LLM (72B) | 59.52% | 63.13% | 60.35% | 56.33% |\\n| Finetuned-Bert | 70.27% | 69.74% | 68.54% | 68.64% |\\n| GraphEval-LP | 82.83% | 83.41% | 83.04% | 82.40% |\\n| **GraphEval-GNN** | **85.00%** | **90.00%** | **83.00%** | **84.00%** |\\n\\n**[1]** FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation, EMNLP 2023.\\n\\n**Q2. While GraphEval-GNN offers significant improvements in performance, its scalability is questionable for larger datasets. Training GNNs on extensive viewpoint-subgraphs may become computationally prohibitive, especially if the graph structure grows significantly.** \\n\\n**Response:** Thanks for your valuable feedback. Indeed, in some standard and mature GNN tasks such as OGB [1,2], the number of nodes and edges can reach the order of hundreds of millions. In real-world large-scale deployment scenarios like recommendation systems, GNNs can efficiently handle tens of billions of nodes and trillions of edges [3,4]. The number of nodes in our viewpoint-subgraphs often does not reach the million level, so technically, training GNNs on extensive viewpoint-subgraphs is not problematic. \\n\\nFurthermore, our paper also provides another method that performs slightly worse than GraphEval-GNN but is more efficient and computationally friendly, called GraphEval-LP. 
This method is based on label propagation and has a linear time complexity of $O(k \\\\cdot n \\\\cdot p) \\\\ll O(n^2)$, where $k$ is the number of iterations, $p$ is the predefined maximum degree for each node, and $n$ is the number of nodes. Therefore, as the number of viewpoint-subgraphs increases, GraphEval-LP can efficiently perform idea evaluation.\\n\\n**[1]** Open graph benchmark: Datasets for machine learning on graphs, Neurips 2020.\\n\\n**[2]** Graph neural networks: A review of methods and applications, AI Open 2020.\\n\\n**[3]** Graph convolutional neural networks for web-scale recommender systems, KDD, 2018.\\n\\n**[4]** Pregel: a system for large-scale graph processing, KDD 2010.\"}", "{\"title\": \"Response to Reviewer zpR7\", \"comment\": \"**Q1. The authors mainly evaluate the proposed method on only the research idea evaluation task, and, while this task is very important and less explored, this point may limit its generalizability to other tasks and domains (i.e., I believe it can be well applicable to other tasks related to evaluating long text and it is worth trying it).**\\n\\n**Response:** Thanks for the reviewer\\u2019s constructive feedback. We appreciate the reviewer's suggestions, which are excellent for exploring the generalization capability of our method across various tasks and domains.\\n\\nTherefore, following the reviewer's advice, **we have chosen to conduct experiments on the dataset described in [1]**. Specifically, we used human-annotated data from the FActScore dataset, where each entry contains \\\"atomic facts\\\" about celebrities generated by LLMs, along with assessments from human annotators on whether these \\\"atomic facts\\\" were supported by the materials provided to the annotators. Based on the \\\"atomic facts\\\" and human annotations from the training set, our method needed to predict the labels of \\\"atomic facts\\\" in the test set. 
We selected topics such as Ramesses IV, Lanny Flaherty, and Florencia Bertotti, and divided the training, validation, and test sets in a 7:1:2 ratio. We compared GraphEval and some applicable baselines on this dataset in the following table. The experimental results in the table verify that **our approach performs well on the long-form text evaluation task, demonstrating good adaptability to various tasks.** All the discussions and experimental results mentioned above have been updated in ***[Section E.1 of the Appendix]*** and ***[Table 20]*** of the revised PDF version.\\n\\n**Comparative performance results on the Fact Verification dataset.** Bold text denotes the best results. For all metrics\\u2014Accuracy, Macro Precision, Macro Recall, and Macro F1 Score\\u2014higher values indicate more precise predictions.\\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|--------------------|----------|-----------|--------|----------|\\n| Prompted LLM (7B) | 49.79% | 57.19% | 52.27% | 47.59% |\\n| Prompted LLM (72B) | 59.52% | 63.13% | 60.35% | 56.33% |\\n| Finetuned-Bert | 70.27% | 69.74% | 68.54% | 68.64% |\\n| GraphEval-LP | 82.83% | 83.41% | 83.04% | 82.40% |\\n| **GraphEval-GNN** | **85.00%** | **90.00%** | **83.00%** | **84.00%** |\\n\\n**Q2. The authors can discuss some works on evaluating the long-form text [1, 2], as some of them tend to divide the long text into multiple subsets (similar to the proposed approach) and evaluate each of them.**\\n\\n**Response:** Thanks for the reviewer\\u2019s insightful feedback. Following the reviewer\\u2019s suggestion, we add the corresponding citations and discussion in ***[Section 2]***:\\u201cRecently, some research works have been evaluating long-form texts, such as biographies of people (Min et al., 2023) and complex mathematical reasoning texts (Lightman et al., 2023). These studies divide the long text into multiple subsets and evaluate each of them. 
Inspired by these works, we decompose the obscure ideas into simple, understandable viewpoint nodes using LLMs, and further evaluate the idea based on graph algorithms.\\u201d.\\n \\n**Q3. The authors can incorporate more analyses, to showcase the efficacy of the proposed approach more. One experiment that is worthwhile to try is, to showcase the generalizability of the proposed approach (or the constructed viewpoint graph) to new papers in 2023 by training it with papers before 2022.** \\n\\n**Response:** Thank you for your insightful suggestions. We believe that this suggestion, like Q1, can effectively help validate the generalization ability of our method across different dimensions. \\nFollowing the reviewer's advice, **we selected papers from before 2022 in the ICLR Papers dataset as the training and validation sets, and papers from 2023 as the test set.** We compared the performance of GraphEval with other classic baselines in the following table. **The results in the table validate GraphEval's temporal generalization ability in the task of idea evaluation.** All the discussions and experimental results have been updated in ***[Section E.2]*** of the Appendix and ***[Table 21]*** of the revised PDF version.\\n\\n**Comparative performance results under the setting of idea evaluation of different years.** Bold text denotes the best results. 
For all metrics\\u2014Accuracy, Macro Precision, Macro Recall, and Macro F1 Score\\u2014higher values indicate more precise predictions.\\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|-------------------|----------|-----------|--------|----------|\\n| Prompted LLM (7B) | 16.67% | 20.63% | 26.12% | 18.25% |\\n| Prompted LLM (72B)| 14.29% | 11.25% | 32.47% | 11.76% |\\n| Finetuned-Bert | 48.41% | 42.46% | 36.14% | 31.57% |\\n| GraphEval-LP | 63.20% | 52.38% | 48.60% | 44.72% |\\n| **GraphEval-GNN** | **76.19%** | **48.25%** | **57.38%** | **51.32%** |\"}", "{\"title\": \"Thanks for the Reviewer\\u2019s Constructive Feedback\", \"comment\": \"Thank you for your thoughtful and constructive feedback. We are pleased to hear that our responses have addressed your concerns. We are committed to incorporating the suggested changes in our revisions to further enhance the manuscript. By the way, we also sincerely wish you a Happy Thanksgiving. Thank you for the valuable feedback and hard work you provided during the rebuttal phase.\"}", "{\"title\": \"Response to Reviewer oRRW (3/3)\", \"comment\": \"**Q10. GraphEval framework relies on a 7B parameter LLM while Fine-tuning BERT (with much fewer parameters) achieves comparable performance. How about the performance of fine-tuning 7B models?**\\n\\n**Response:** Thanks for your comments. We would first like to clarify that **Fine-tuned BERT does not achieve comparable performance to GraphEval**, as shown in ***[Tables 2 and 3]***. Compared to Fine-tuned BERT, GraphEval-GNN achieves **an absolute improvement of at least 13.8% and a relative improvement of at least 25.88% in F1 Score.** These data demonstrate that the performance enhancement of GraphEval over Fine-tuned BERT is very significant. \\n\\nAdditionally, although the 7B parameter LLM has more parameters than Fine-tuned BERT, we only utilize the inference capabilities of the 7B parameter LLM. 
In terms of computation cost, our lightweight GraphEval has a much greater advantage. **This is because the parameter of BERT is million-level, whereas GraphEval-LP is parameter-free and GraphEval-GNN has parameters in the thousand-level.**\\n\\n Lastly, **fine-tuning large parameter models, such as the 7B LLM, may indeed trade higher computation costs for performance advantages.** However, we believe that the LLM as a foundation model has already received widespread attention and research from the academic community for its zero-shot capabilities in idea evaluation (Si et al., 2024; Baek et al., 2024). Therefore, we argue that **rather than focusing on using large parameter models to fine-tune idea evaluation problems, it is more worthwhile for researchers to explore how a lightweight framework can further enhance the LLMs\\u2019 capabilities for idea evaluation.** This is also the focus of our paper, **using a lightweight, graph-based framework to enhance the LLM's idea evaluation capabilities.**\\n\\n**Q11. The experimental evaluation is limited; for instance, there is no ablation study** \\n\\n**Response:** Thanks for your feedback. Regarding the ablation studies of the paper, we have actually discussed the selection of LLMs and the impact of different relation extraction methods in the article. Specifically, in Section 7.1, our experiments focused on the influence of LLMs of different sizes on the performance of idea evaluation. We found that in many cases, using smaller LLMs not only reduces costs but also performs comparably to larger models. Based on these discussions, we also adopted the setting of smaller LLMs in the viewpoint extraction part of GraphEval. Additionally, we discussed the impact and selection of relation extraction methods from the perspective of edge density in Section 3. Indeed, because our method performs well and is robust, we did not extensively discuss specific parameter tuning and model design during the experiments.
\\n\\nOn the other hand, following the advice of other reviewers, we have added some discussions on ablation study in the ***[Appendix F]*** of our revised version. Specifically, the impact of varying edge densities is detailed in ***[Appendix F.1 and Table 22]***. Additionally, the effects of various lightweight graph neural network architectures are explored in ***[Appendix F.2 and Table 23]***. Furthermore, we evaluate the comparative impact of alternative relation extraction methods in ***[Appendix F.3 and Table 24]***.\\n\\n**Q12. The construction of the graph relies solely on textual information. Evaluating a paper\\u2019s acceptance potential based exclusively on textual relevance is not entirely reasonable.** \\n\\n**Response:** Thanks for the reviewer\\u2019s feedback. Indeed, as discussed in ***[lines 36-45]*** of our paper, the core focus of our article is on LLM-based idea evaluation, a promising hot topic in academia. Thus, similar to many existing works (Lu et al., 2024; Baek et al., 2024; Ubonsiri, 2024; Du et al., 2024; Lin et al., 2023a), we concentrate on using textual information to build our model. Additionally, in Section 3 of our paper, we discuss how different methods of viewpoint extraction can impact model performance and costs. We found that the method of textual relevance for viewpoint extraction not only significantly reduces the cost of extracting viewpoints using LLMs but also achieves effective results, which has been repeatedly verified in our experiments.\"}", "{\"title\": \"Clarification for our experiment settings and express sincere thanks for the reviewer's constructive feedback\", \"comment\": \"Thank you for the reviewer's insightful feedback and hard work. Your valuable suggestions have significantly improved our paper. 
However, we still want to clarify that our primary focus is on idea evaluation, and the abstracts of papers reflect the core ideas, main methods, and experiments of the entire paper, serving as a comprehensive summary that includes multiple factors. Therefore, we have chosen abstracts as the representative element for idea evaluation. We also acknowledge that other specific factors mentioned by the reviewer may influence paper acceptance. However, our paper primarily focuses on the general study of idea evaluation rather than the specific area of predicting paper acceptance. We will discuss these aspects in detail in the future work part of the revised PDF.\"}", "{\"title\": \"Please reply to the authors' response.\", \"comment\": \"Dear reviewers,\\n\\nThe ICLR author discussion phase is ending soon. Could you please review the authors' responses and take the necessary actions? Feel free to ask additional questions during the discussion. If the authors address your concerns, kindly acknowledge their response and update your assessment as appropriate.\\n\\n\\nBest,\\nAC\"}", "{\"title\": \"Kindly inquiry to revisit our responses and reconsider the score in light of the detailed clarifications and experiments provided\", \"comment\": \"Dear Reviewer oRRW,\\n\\nWe recognize that the timing of this discussion period may not align perfectly with your schedule, but we greatly value the opportunity to discuss with you, and we sincerely thank you for your valuable feedback and suggestions. **In our further response, we have validated the strong scalability and generalization capabilities of GraphEval-GNN on the large-scale ASAP-Review dataset.** We have dedicated significant time and effort to completing this experiment and hope to earn your recognition and further evaluation to address the concerns and improve the quality of our paper. We believe that we have addressed all of your previous concerns. 
We would greatly appreciate it if you could review our response and the revised manuscript. If you find that your concerns have been resolved, we kindly request that you reconsider the review score.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Could you let us know if our rebuttal has sufficiently addressed your concerns?\", \"comment\": \"Dear Reviewer oRRW,\\n\\nWe recognize that the timing of this discussion period may not align perfectly with your schedule, yet we would greatly value the opportunity to continue our dialogue before the deadline approaches.\\n\\nWe hope that our responses and additional experiments have effectively addressed your concerns. We truly appreciate all the valuable advice we have received, and we are pleased to share that one of the reviewers has kindly recognized our improvements by raising their score. This acknowledgment reflects the positive impact of our collaborative efforts in enhancing the quality of the paper.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\"}", "{\"title\": \"Response to Reviewer oRRW (2/3)\", \"comment\": \"**Q6. The task tackled in this study does not appear complex but the dataset used for evaluation is quite small. Given the availability of larger, relevant datasets (e.g., provided by ReviewAdvisor [1], which covers ICLR and NeurIPS), could the authors explain why a larger dataset was not employed?**\\n\\n**Response:** Thank you for your question. Indeed, our paper primarily aims to demonstrate the technical contribution of GraphEval. **We have proven that GraphEval can still perform well under limited resources and data.** Following existing research on graphs (Yang et al., 2024; Zhu et al., 2024; Shang et al., 2024), we can anticipate that GraphEval will perform better with larger datasets.
Moreover, we focus on the LLM field, using graphs to assist LLMs in idea evaluation. **Currently, most LLM-based idea evaluation methods are based on prompts. These methods do not improve in performance as the dataset size increases.** Therefore, to ensure a fair comparison, we have chosen lightweight but representative datasets for our experiments.\\n\\n**Q7. What is the ratio of positive to negative samples in the dataset? The authors note that LLMs are inclined to accept most papers, yet the baseline methods report very low accuracy (below 20% on the ICLR dataset), suggesting a high prevalence of negative samples. Could the authors provide details on dataset composition and how they ensured a fair experimental comparison?** \\n\\n**Response:** We appreciate the reviewer's insightful comments. In our paper, we have shown the ratio of positive to negative samples of two datasets and the details on dataset composition in ***[Table 5 of Appendix]***. To ensure a fair experimental comparison, we made certain that all methods used consistent data ***[Table 5 of Appendix]*** and hyperparameters ***[Table 7 of Appendix]***. Furthermore, according to the task description in ***[lines 348-352]***, idea evaluation is a multi-class prediction problem. Moreover, the label distribution of the ICLR Papers Dataset closely aligns with the acceptance rate of the ICLR conference, where only about 30% of submissions are accepted. Therefore, although LLMs are inclined to accept most papers, the prompt-based LLM approach performs poorly due to significant bias in its evaluation.\\n\\n**Q8. The paper lacks implementation details, and no code or datasets are available.** \\n\\n**Response:** Thanks for your feedback. We answer your question step by step.\\n\\n**[Implementation details]** We believe that we have provided a fairly comprehensive introduction to the implementation details in various sections of the paper. 
The implementation details of the method are extensively introduced in the paper through detailed formulas, such as Equations (3), (4), and (5), along with corresponding descriptions and the pseudocode process of Algorithm 1 ***[lines 324-336]***. Additionally, we have listed the hyperparameter settings for GraphEval implementation in the appendix ***[Table 7]***, as well as the prompts used in GraphEval ***[Table 4 and Table 6]***. Moreover, we have thoroughly introduced the implementation of the baseline methods ***[lines 365-412]*** and enumerated all the detailed prompts used by the baseline methods ***[Table 8-19]***. Regarding the datasets and tasks, we have detailed their information in the paper ***[lines 354-362]*** and also illustrated the process of viewpoint extraction from a research idea and the specific data proportion structure in ***[Appendix A and Table 5]***. \\n\\nMoreover, in order to further improve our implementation details, we have followed the Reviewer d4yk's suggestions in Q2 and added a discussion of implementation details in ***[lines 425-431]*** of the revised version, including computational complexity, efficiency, and resource requirements.\\n\\n**[Availability of code and datasets]** Given the comprehensive details about the method implementation and datasets presented in our paper, we can ensure the reproducibility of our work. The current code and data are being further organized to enhance their readability and usability. Since ICLR does not mandatorily require the submission of data and code, we look forward to releasing the code and data in the near future.\\n\\n**Q9. Could the authors specify the exact prompts used in the CoT and ToT baselines?** \\n\\n**Response:** Thanks for your question. 
Actually, we have presented the exact prompts used in the CoT and ToT baselines in ***[Table 11 and 12]*** of Appendix respectively.\"}", "{\"title\": \"Could you let us know if our further response has sufficiently addressed your concerns?\", \"comment\": \"Dear Reviewer oRRW,\\n\\nWe hope that our further responses and additional experiments have effectively addressed your concerns. We truly appreciate all the valuable advice we have received, and we are pleased to share that other reviewers have kindly recognized our improvements by significantly raising their scores. This acknowledgment reflects the positive impact of our collaborative efforts in enhancing the quality of the paper.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for addressing my comments.\\n\\nThe authors have made efforts to respond to each of my concerns, but I believe there is still room for improvement to substantiate the effectiveness of the proposed approach. My primary concern remains the small size of the dataset used in the evaluation, which limits the generalizability of the findings.\\n\\nI have updated my evaluation accordingly but will maintain my recommendation for this paper. I will participate in the discussion with other reviewers and the rebuttal will be considered during the discussion.\"}", "{\"title\": \"Further response to Reviewer oRRW\\u2019s concern on scalability generalization\", \"comment\": \"Thanks for the reviewer\\u2019s feedback. We want to reclaim the rationality of the settings of using our current dataset. As mentioned in our response to Q6, our paper primarily aims to demonstrate the technical contributions of GraphEval, and to ensure a fair comparison with LLM baselines, we have chosen the current dataset for our experiments. 
However, we follow the reviewer's suggestion and have conducted experiments on a large-scale dataset. Specifically, we conducted experiments on the ASAP-Review dataset [1]. The ASAP-Review dataset is an open peer review dataset that includes 5,192 ICLR papers from 2017-2020 obtained through OpenReview and 3,685 NeurIPS papers from 2016-2019 accessed through NeurIPS Proceedings. A detailed introduction to this dataset, along with its composition, can be found in Section 3.1 and Table 2 of [1]. Similar to the settings described in Section 6 of our paper, we used the abstracts of all papers in the dataset as inputs and the review decisions of the papers as the predicted labels, which included Accept (Oral), Accept (Spotlight), Accept (Poster), and Reject. We divided the dataset into training, validation, and test sets in the proportions of 70%, 10%, and 20%, respectively. It is important to note that for NeurIPS papers, since only accepted papers are included and no specific labels such as Oral, Spotlight, or Poster and ratings are provided, we have to assign all paper labels as Accept (Poster). This approach ensures the accuracy of the dataset because over 85% of the papers accepted at the NeurIPS conference are designated as posters. As shown in the table below, we compared the performance of GraphEval-GNN with that of Fine-tuned BERT and Prompted LLM on this dataset. We observed that GraphEval-GNN still maintains the best performance on this large-scale dataset, with an accuracy 9.8% better than the strongest baseline, Fine-tuned BERT. Furthermore, although the rare labels of Accept (Oral) and Accept (Spotlight) (less than 4%) make it difficult for all methods to perform well in terms of macro F1 score, GraphEval-GNN still achieved an 8% improvement in macro F1 score compared to Fine-tuned BERT. These observations demonstrate the robust generalization capability of GraphEval-GNN on large-scale datasets. 
All the discussions and experimental results mentioned above have been updated in Section G of the Appendix and Table 25 of the revised PDF version.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\\n\\n**Comparative Performance Results for Different Models on the ASAP-Review Dataset.** Bold text denotes the best results. For all metrics\\u2014Accuracy, Macro Precision, Macro Recall, and Macro F1 Score\\u2014higher values indicate more precise predictions. \\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|---------------------|----------|-----------|---------|----------|\\n| Prompted LLM (7B) | 22.00% | 11.04% | 28.57% | 12.83% |\\n| Prompted LLM (72B) | 4.00% | 4.00% | 17.86% | 3.04% |\\n| Finetuned-Bert | 61.17% | 29.81% | 30.37% | 29.86% |\\n| **GraphEval-GNN** | **67.02%** | **33.11%** | **32.86%** | **32.20%** |\\n\\n**[1]** Yuan, Weizhe, Pengfei Liu, and Graham Neubig. \\\"Can we automate scientific reviewing?.\\\" Journal of Artificial Intelligence Research 75 (2022): 171-212.\"}", "{\"title\": \"Response to Reviewer 5M8a (2/2)\", \"comment\": \"**Q3. The paper mentions that the LLM-based relation extraction yields sparse edges between viewpoint nodes (with a 10.73% edge density on average). This sparsity may limit the effectiveness of GraphEval in certain scenarios, particularly where relationships between ideas are less explicit. The authors address this by using BERT embeddings, but further experiments with alternative relation extraction methods could improve robustness. Is it possible to try alternative methods for relation extraction, such as unsupervised learning or hybrid approaches, which might address the sparsity issue in graph construction, especially when dealing with less structured idea content?**\\n\\n**Response:** Thanks for the reviewer\\u2019s insightful feedback. 
We answer your question from two aspects.\\n\\n**[The validity and robustness of embedding similarity method]** In fact, the relation extraction method based on embedding similarity can not only construct the viewpoint-node relationship efficiently and at low cost but also can flexibly cope with various tasks by adjusting different edge densities to improve its robustness across different tasks. Specifically, as the response to Q2 of Reviewer d4yk, to explore the impact of different edge densities (as defined in Section 3, where the number of edges is divided by the proportion of all possible edge connections) on the performance of GraphEval-GNN, we selected five groups of varying edge densities for experimentation and obtained their final results, as shown in the table below. From the table, it can be observed that the performance of GraphEval-GNN initially increases and then decreases as the edge density rises. This suggests that an increase in edge density can enhance the representation of GNNs in the idea evaluation problem to a certain extent, but excessively high edge density may lead to performance degradation due to oversmoothing [1, 2]. 
In practical applications, we can adjust the performance at different edge densities on the validation set to enhance the robustness of our method across various tasks and datasets.\\n\\n**Performance under different edge densities**\\n\\nThe performance of GraphEval-GNN initially increases and then decreases as the edge density rises.\\n\\n| Edge Density | Accuracy | Precision | Recall | F1 Score |\\n|--------------|----------|-----------|--------|----------|\\n| 5.6% | 53.2% | 14.54% | 18.77% | 19.76% |\\n| 11.4% | 64.6% | 20.52% | 26.86% | 27.77% |\\n| 22.7% | 72.2% | 35.98% | 36.74% | 43.68% |\\n| **45.5%** | **76.0%**| **38.40%**| **37.30%**| **44.80%** |\\n| 91% | 66.5% | 22.76% | 30.22% | 29.37% |\\n\\n**[Hybrid relation extraction method]** We also followed the reviewer's suggestion and tried to propose a hybrid relation extraction method named Hybrid to compare with our fully similarity-based approach, GraphEval. Specifically, the hybrid method uses Prompted LLMs mentioned in Section 3 to connect nodes within viewpoint-subgraphs, while the edges between viewpoint-subgraphs are still based on similarity. The results of the two relation extraction methods on the ICLR Papers dataset are presented in the following table, showing that GraphEval-GNN performs better than Hybrid. This might be due to the difficulty of ensuring adequate edge density when connecting nodes within viewpoint-subgraphs using Prompted LLMs. Additionally, this connection method may increase the likelihood of hallucinations produced by LLMs and increase the token cost of LLMs, thus affecting the final impact on idea evaluation and the actual expenses. 
All the discussions and experimental results mentioned above have been updated in Section F.3 of the Appendix and Table 24 of the revised PDF version.\\n\\n**Performance comparison of GraphEval-GNN via two different alternative relation extraction methods.**\\n\\n| Model | Accuracy | Precision | Recall | F1 Score |\\n|----------------|----------|-----------|--------|----------|\\n| Hybrid | 62.0% | 25.08% | 27.60% | 25.46% |\\n| **GraphEval-GNN** | **76.0%** | **38.40%** | **37.30%** | **44.80%** |\\n\\n**[1]** Measuring and relieving the over-smoothing problem for graph neural networks from the topological view, AAAI 2020 \\n\\n**[2]** A survey on oversmoothing in graph neural networks, arXiv preprint 2023.\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"Thanks for the authors' comprehensive evaluation of viewpoint-graphs extraction from two perspectives, which effectively addressed my concerns. Additionally, could authors release your related code, which ensures the reproducibility of this paper?\\n\\nI am happy to see the paper accepted (raising my score from 6 to 8).\"}", "{\"title\": \"Further response to Reviewer d4yk\\u2019s concern on evaluation of viewpoint-graphs extraction\", \"comment\": \"Thanks for the reviewer\\u2019s constructive feedback. We want to reclaim that we acknowledge that the evaluation of viewpoint-graphs extraction is very important, but the main focus of our paper is on how graph modeling can enhance the performance of Large Language Models (LLMs) in idea evaluation through a series of empirical analyses. However, we follow the reviewer's suggestion and have conducted experiments to evaluate the accuracy of viewpoint-graphs extraction. Specifically, we explore from two perspectives. First, we use a prompt-based approach [1,2], allowing a large LLM to assess whether each viewpoint is consistent with the original idea. 
Specifically, we employ the [LLaMa-3.1 (405b) LLM](https://ai.meta.com/blog/meta-llama-3-1/), which has shown excellent performance in evaluation tasks, as the evaluator. Using the prompt from Table 1 below, we evaluate the consistency between the viewpoint and the idea, with an output of 1 indicating consistency and 0 indicating inconsistency. We calculate the proportion of samples judged consistent and average this across all samples to determine the consistency rate. We finally achieve consistency rates of 99.47% and 99.82% for the ICLR Papers and AI Researcher datasets, respectively. These rates, very close to 100%, demonstrate the high degree of consistency between the generated viewpoints and the original ideas as achieved by our method.\\n\\nAdditionally, we measure the accuracy of viewpoints from an entity-level perspective. Specifically, we first aggregate the constructed viewpoints and then assess their entity-level accuracy with respect to the idea using entity-level factual consistency metrics [3]. We report the results on the datasets ICLR Papers and AI Researcher in Table 2 below. From the table, we can observe that the entity-level Precision, Recall, and F1 Score between the viewpoints and the idea exceed 0.9 on both datasets, which also validates the accuracy and rationality of our viewpoints. All the discussions and experimental results mentioned above have been updated in Section H of the Appendix and Table 26-27 of the revised PDF version.\\n\\nCould you let us know if your concerns have been adequately addressed? If you find that your concerns have been resolved, we would appreciate it if you could reconsider the review score.\\n\\nThanks!\\n\\n**Table 1: Prompt Template of Viewpoint Accuracy Evaluation.**\\n```\\n[Instruction]: \\nDecide if the following Viewpoint, derived from the idea, is consistent with the Idea. 
Note that consistency means all information in the viewpoint is fully supported by the idea.\\n\\n[Input]:\\nIdea: {idea}\\nViewpoint: {viewpoint}\\n\\n[Output]: \\nExplain your reasoning step by step, identifying if each part of the viewpoint aligns with the idea, then answer: Is the viewpoint consistent with the idea? Answer with only 1 for yes or 0 for no.\\n```\\n\\n**Table 2: Performance of entity-level factual consistency metrics for ICLR Papers and AI Researcher datasets.**\\n\\n| Dataset | Precision | Recall | F1 Score |\\n| -------------- | --------- | ------ | -------- |\\n| ICLR Papers | 0.9339 | 0.9288 | 0.9314 |\\n| AI Researcher | 0.9472 | 0.9004 | 0.9232 |\\n\\n**[1]** ChatGPT as a Factual Inconsistency Evaluator for Text Summarization, arXiv 2023.\\n\\n**[2]** Human-like Summarization Evaluation with ChatGPT, arXiv 2023.\\n\\n**[3]** Entity-level Factual Consistency of Abstractive Text Summarization, ACL 2021.\"}
This paper breaks down complex ideas into simpler viewpoints to construct viewpoint-graphs and evaluates ideas with label propagation or GNN algorithms under the low computation and API costs. Meanwhile, the proposed plagiarism detection mechanism integrates the temporal information for more accurate and fair idea evaluation.\\n3. The paper is well-written and organized, with a clear problem setup, methodology, and experimental evaluation.\", \"weaknesses\": \"1. The paper lacks a thorough theoretical analysis or justification for the proposed mechanisms, such as the viewpoint-graphs extraction and the plagiarism detection mechanism. Whether viewpoints extracted by LLMs are reasonable? If so, how to verify the accuracy of these viewpoints? Is there an overlap between viewpoints from different complex ideas? How are the inherent LLM issues\\u2014like hallucinations and token limitations\\u2014addressed or mitigated in this framework?\\n2. The paper does not provide comprehensive ablation and hyperparameter studies, such as the impact of different LLMs/GNNs and varying values of $k$ and $m$ (c.f., line 247 and line 255). Moreover, it is beneficial to provide an analysis of the computational complexity, efficiency, and resource requirements of the proposed framework, including training/inference time, which would be helpful in assessing its scalability and practical applicability. \\n3. The paper seems to underutilize the reasoning and generative capabilities of LLMs, which could improve GraphEval\\u2019s interpretability and effectiveness in idea evaluation tasks.\\n4. Several typos are present in the paper. For example, In line 302, \\\"... As illustrated in Sec 3, ...\\\" lacks a period after \\\"Sec\\\". In lines 414-422, the use of **macro** F1 score to evaluate the accuracy is inconsistent with the content \\\"..., and **micro** F1 score ....\\\" (in lines 399-401).\", \"questions\": \"1. 
Could the authors discuss the similarities and differences between GraphEval and GraphRAG [1]? GraphRAG breaks a document into chunks, extracts a knowledge graph from raw text, builds a community hierarchy, generates summaries for these communities, and leverages these structures in RAG tasks. This seems similar to the viewpoint-graph extraction proposed here.\\n2. In the left of Figure 1, are the colors of the positive and negative prompts correctly marked? Specifically, is \\\"If a paper is good or you are unsure, give it good scores and accept it.\\\" intended as a negative prompt?\\n\\n**Reference** \\n[1] From Local to Global: A Graph RAG Approach to Query-Focused Summarization, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"The authors\\u2019 rebuttal addressed most of my concerns, so I have raised my score from 5 to 6.\"}" ] }
5RPpwW82vs
MutualNeRF: Improve the Performance of NeRF under Limited Samples with Mutual Information Theory
[ "Zifan Wang", "Jingwei Li", "Yitang Li", "Yunze Liu" ]
This paper introduces MutualNeRF, a framework enhancing Neural Radiance Field (NeRF) performance under limited samples using Mutual Information Theory. While NeRF excels in 3D scene synthesis, challenges arise with limited data and existing methods that aim to introduce prior knowledge lack theoretical support in a unified framework. We introduce a simple but theoretically robust concept, Mutual Information, as a metric to uniformly measure the correlation between images, considering both macro (semantic) and micro (pixel) levels. For sparse view sampling, we strategically select additional viewpoints containing more non-overlapping scene information by minimizing mutual information without knowing the ground truth images beforehand. Our framework employs a greedy algorithm, offering a near-optimal solution for this task. For few-shot view synthesis, we maximize the mutual information between inferred images and ground truth, expecting inferred images to gain more relevant information from known images. This is achieved by incorporating efficient, plug-and-play regularization terms. Experiments under limited samples show consistent improvement over state-of-the-art baselines in different settings, affirming the efficacy of our framework.
[ "nerf", "mutual information", "sparse view sampling", "few-shot view synthesis" ]
Reject
https://openreview.net/pdf?id=5RPpwW82vs
https://openreview.net/forum?id=5RPpwW82vs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u9DxKWIrMj", "u5XZEk21Jj", "tGEjGNfwE9", "rLcEGUqJt6", "jhvn1ucBnH", "fziSXVFhzr", "Z0maAVV8YW", "ULGtJInbRL", "TvN91JxxDT", "Srh0lHq9V4", "PTVXbtEaMu", "MkdfxxBVI1", "960zMcXWKc", "2SCXZ9DJ2o", "1YQGzvnPQh", "020GNDSdn9" ], "note_type": [ "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1730712994510, 1730683345065, 1732000346550, 1730451857196, 1733195595471, 1731999771291, 1730717294321, 1731999804516, 1737523422696, 1733097110073, 1732635935399, 1733097129417, 1732364626851, 1732515993063, 1734852179986, 1731999735636 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission910/Reviewer_ryB4" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_Hj8j" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_BHX7" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_Hj8j" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_MJVb" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_BHX7" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ], [ "ICLR.cc/2025/Conference/Submission910/Reviewer_ryB4" ], [ "ICLR.cc/2025/Conference/Submission910/Area_Chair_nrfe" ], [ "ICLR.cc/2025/Conference/Submission910/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a framework to improve the performance of sparse view sampling and few-shot view synthesis by mutual information. 
For sparse view sampling, the authors first calculate the semantic space distance and pixel space distance, then use a greedy algorithm to select sparse views for training. For few-shot view synthesis, the authors add two regularization terms based on semantic space distance and pixel space distance, to improve the performance of view synthesis. The authors conduct detailed experiments to demonstrate the effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors demonstrate the effectiveness of both sparse view sampling and few-shot view synthesis. Using mutual information to select views for NeRF is reasonable.\", \"weaknesses\": \"1. The motivation for pixel space distance is not clarified. According to Definition 3, the pixel space distance is the expectation of distance between any two points of rays. The authors should clarify the motivation behind Definition 3 since it is important for the following parts. The semantic space distance is fairly reasonable.\\n\\n2. If my understanding is correct, for the few-shot view synthesis, the authors just added two regularization terms to the NeRF training. However, the relationship between mutual information and few-shot view synthesis is not clear. For sparse view sampling, minimizing mutual information is reasonable.\\n\\n3. For the few-shot view synthesis, the proposed method needs to evaluate the semantic distance between randomly rendered images and ground truth images. Therefore, this will bring additional training costs. The analysis should be provided.\\n\\n4. Compared with NeRF, 3D Gaussian splatting is much better in both training and rendering. The authors should conduct experiments based on 3DGS.\\n\\n5. There are many typos and grammatical errors. 
For example, formulas should come with indices.\", \"questions\": \"please refer to the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces MutualNeRF, a framework that enhances Neural Radiance Field (NeRF) performance under limited sample conditions using Mutual Information Theory. It addresses sparse view sampling and few-shot view synthesis by minimizing and maximizing mutual information, respectively. The framework employs a greedy algorithm for viewpoint selection and plug-and-play regularization terms for training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Utilizes mutual information theory to improve NeRF under limited data.\\n\\nProposes a greedy algorithm for strategic viewpoint selection in sparse view sampling.\\n\\nIntroduces efficient regularization terms to enhance few-shot view synthesis.\", \"weaknesses\": \"1.\\tHow about using mutual information for 3DGS?\\n2.\\tWhat is the rendering speed of the proposed method, especially when compared with 3DGS?\\n3.\\tI suggest that the authors compare or discuss with more methods, like PixelNeRF[A], CR-NeRF[B], and MVSGaussian[C], to fully verify the effectiveness of the proposed mutual information.\\n4.\\tWhy choose to pick images instead of training as a whole? Are there images that do not belong to the target scene? If we use a fast reconstruction method like 3DGS, training all images won't take much time.\\n\\n**Reference**\\n\\n[A] Yu, Alex, et al. \\\"pixelnerf: Neural radiance fields from one or few images.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\\n\\n[B] Yang, Yifan, et al. \\\"Cross-ray neural radiance fields for novel-view synthesis from unconstrained image collections.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 
2023.\\n\\n[C] Liu, Tianqi, et al. \\\"MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\", \"questions\": \"Please refer to the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We extend our gratitude to the reviewer for the comments and suggestions. Below, we address the primary concerns that have been raised.\\n\\n\\n\\n>Q1: The proposed method introduces more complex training requirements and utilizes a large model, CLIP, for assessing semantic space distance, which may impact training time and peak memory consumption.\\n\\n**A1:** We have explored **alternative semantic metrics** to quantify \\\"image similarity,\\\" such as DINOv2[1] and MAE[2], as encoders in our loss computation. Below are the comparative results with NeRF on the Blender dataset:\\n\\n| Method | PSNR \\u2191 | SSIM \\u2191 | LPIPS \\u2193 |\\n| -------- | -------- | -------- | -------- |\\n| NeRF | 14.934 | 0.687 | 0.318 |\\n| NeRF + CLIP (vit-base-patch32) | 22.503 (+7.569) | 0.823 (+0.136) | 0.124 (-0.194) |\\n| NeRF + DINOv2 (dinov2-base) | 22.882 (+7.948) | 0.830 (+0.143) | 0.119 (-0.199) |\\n| NeRF + MAE (MAE-ViT) | 21.652 (+6.718) | 0.798 (+0.111) | 0.165 (-0.153) |\\n\\n\\nOur findings indicate that different encoders exhibit similar performance improvements, demonstrating that **our framework is generalizable** and does not rely heavily on CLIP to compute semantics. \\n\\n>Q2: Figure 4 shows that some regions, such as the stairs, are not improved. Why might this be the case? Could this indicate limitations in the Mutual Information design? Specifically, which types of regions can be enhanced by the macro and micro losses? Currently, it is unclear which specific issues the proposed framework addresses. 
Could the authors provide a more detailed explanation?\\n\\n\\n**A2:** Thank you for pointing out the limitation shown in Figure 4. The observed lack of improvement in certain regions, such as the stairs, might be due to insufficient feature alignment between the synthetic and reference views in these areas. This issue may arise because such regions often contain intricate geometric or high-frequency texture details that are challenging for the mutual information-based design to fully capture, particularly when the macro and micro features have lower correspondence.\\n\\nOur proposed framework leverages macro and micro mutual information losses to enhance overall structure and fine-grained detail consistency, respectively. The macro loss is effective for improving large-scale structural features by maximizing high-level correspondence, whereas the micro loss helps to align local fine details, which might be missing or mismatched across views. We plan to investigate improved methods for enhancing intricate regions, such as adaptive feature extraction or incorporating geometry-aware constraints to better align these areas.\\n\\n>Q3: Does the proposed macro loss also improve the performance of 3DGS-based methods? If so, did the authors experiment with sparse-view 3DGS splatting methods?\\n\\n**A3:** Our paper primarily focuses on NeRF synthesis with limited training data. Due to the limited time for the rebuttal, we are unable to directly conduct experiments related to 3DGS. We believe extending this approach to 3DGS is a valuable direction for future work.\\n\\n>Q4: What would happen if we applied CLIP as a perceptual loss to support FreeNeRF training? \\n\\n**A4:** Applying CLIP as a perceptual loss to support FreeNeRF training could potentially improve the semantic consistency between the generated and real images, as CLIP's feature space is trained to align well with human perception. 
We think incorporating CLIP as a perceptual loss might introduce additional computational complexity, as observed with our integration of CLIP in the NeRF framework. In future experiments, we plan to further explore the application of CLIP to FreeNeRF to assess its effectiveness in improving the synthesis quality under sparse-view settings.\\n\\nWe thank the reviewer once again for the valuable and helpful suggestions. \\n\\n**References**\\n\\n[1] Oquab M, Darcet T, Moutakanni T, et al. Dinov2: Learning robust visual features without supervision[J]. arXiv preprint arXiv:2304.07193, 2023.\\n\\n[2] He K, Chen X, Xie S, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 16000-16009.\"}", "{\"summary\": \"This paper proposes utilizing Mutual Information Theory to enhance the performance of NeRF with limited samples. Mutual information serves as a metric to measure the correlation between images at both macro and micro levels. The macro perspective focuses on correlations in semantic features, while the micro perspective addresses correlations in the pixel space. By incorporating mutual information, the proposed framework effectively addresses challenges posed by sparse view sampling and few-shot view synthesis, resulting in improved performance. Extensive experiments demonstrate the effectiveness of assessing both semantic and pixel space distances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Introducing mutual information to enhance Neural Radiance Field (NeRF) performance is an innovative idea.\\n2. The experimental results in Table 3 show that the proposed method improves performance across various NeRF frameworks.\\n3. This paper is well-organized.\", \"weaknesses\": \"1. 
Results in Figure 4 show limited improvement between FreeNeRF and the proposed MutualNeRF; only the zoomed-in sections reveal marginal enhancements, which are still not substantial.\\n2. The proposed method introduces more complex training requirements and utilizes a large model, CLIP, for assessing semantic space distance, which may impact training time and peak memory consumption.\\n3. Performance improvements become marginal with more powerful baseline frameworks. For instance, \\\"RegNeRF + Ours\\\" surpasses RegNeRF by 1.28 PSNR, while \\\"FreeNeRF + Ours\\\" outperforms FreeNeRF by only 0.50 PSNR.\", \"questions\": \"1) Figure 4 shows that some regions, such as the stairs, are not improved. Why might this be the case? Could this indicate limitations in the Mutual Information design? Specifically, which types of regions can be enhanced by the macro and micro losses? Currently, it is unclear which specific issues the proposed framework addresses. Could the authors provide a more detailed explanation?\\n\\n2) What are the comparisons in terms of training time and peak memory consumption?\\n\\n3) Table 3 and Table 4 show that the performance gain for \\\"FreeNeRF + Ours\\\" is marginal. This raises concerns about whether MutualNeRF will improve performance for more advanced backbones, such as Sparsenerf [A] or Sparsenerf combined with FreeNeRF. Could the authors comment on this?\\n\\n4) Does the proposed macro loss also improve the performance of 3DGS-based methods? If so, did the authors experiment with sparse-view 3DGS splatting methods, such as [B]?\\n\\n5) What would happen if we applied CLIP as a perceptual loss to support FreeNeRF training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their response. First, the data setup and training speed in the paper still lack a clear explanation. 
Second, a comparison with 3DGS is essential. Therefore, I will maintain my rating.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We express our gratitude to the reviewer for the insightful comments. We address the main concerns below. If the reviewer believes there are additional issues with our paper that have not been addressed, we are very willing to continue the discussion.\\n\\n>Q1: The motivation for pixel space distance is not clarified. According to Definition 3, the pixel space distance is the expectation of distance between any two points of rays. The authors should clarify the motivation behind Definition 3 since it is important for the following parts. The semantic space distance is fairly reasonable.\\n\\n**A1:** We appreciate the reviewer's feedback. Our intuition is mainly based on the idea that capturing light rays from different positions of an object provides richer information about it. Since the object varies continuously internally, we measure the richness of the information provided by evaluating the differences at every point along two light rays.\\n\\n>Q2: The relationship between mutual information and few-shot view synthesis is not clear. \\n\\n**A2:** This paper primarily focuses on providing a new perspective for understanding sampling under limited data. Sparse view sampling and few-shot synthesis are just two application scenarios we explored. More application scenarios can be provided in future work.\\n\\n\\n>Q3: For the few-shot view synthesis, the proposed method needs to evaluate the semantic distance between randomly rendered images and ground truth images. Therefore, this will bring additional training costs. The analysis should be provided.\\n\\n**A3:** We did not add this loss at every iteration. In fact, applying it every 100 iterations yields very good results, keeping the total training time under 1.2 times that of the baseline.\\n\\n>Q4: Compared with NeRF, 3D Gaussian splatting is much better in both training and rendering. 
The authors should conduct experiments based on 3DGS.\\n\\n**A4:** Our paper primarily focuses on NeRF synthesis with limited training data. Due to the limited time for the rebuttal, we are unable to directly conduct experiments related to 3DGS. We believe extending the idea of mutual information to 3DGS is a valuable direction for future work.\\n\\n\\n\\nWe would be delighted to address any further inquiries you may have. Please feel free to reach out with any additional questions or concerns!\"}", "{\"summary\": \"The paper proposes MutualNeRF, a method to improve the performance of Neural Radiance Fields (NeRF) when training samples are limited. The authors integrate mutual information theory to develop a unified approach, enhancing NeRF's effectiveness in sparse view sampling and few-shot view synthesis, by modeling uncertainty in semantic space distance and pixel space distance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper is well written and easy to follow.\", \"The framework\\u2019s design is comprehensive, considering both macro (semantic) and micro (pixel) perspectives in the tasks of sparse view sampling and few-shot NVS.\"], \"weaknesses\": [\"**Complex methodology with marginal gains:** This methodology introduces significant complexity, especially in sparse view sampling, involving greedy algorithms and complex mutual information metrics. However, the observed improvements over simpler baselines are relatively minor, which may not justify the added complexity.\", \"**Lack of novelty.** The attempt to address the task of few-shot novel view synthesis through minimization of mutual information between viewpoints has already been explored, especially in the CVPR 2022 paper **InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering**. 
The methodology of this paper seems to be a re-interpretation of ideas used in DietNeRF (semantic space) and InfoNeRF (pixel space) within the perspective of mutual information theory. I ask the authors to give a more thorough theoretical comparison between this work and the methods that I have mentioned.\"], \"questions\": \"See the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the comments and constructive suggestions. In the following, we address the main concerns raised. Please find the details below.\\n\\n>Q1: How about using mutual information for 3DGS? \\n\\n**A1:** Our paper primarily focuses on NeRF synthesis with limited training data. Due to the limited time for the rebuttal, we are unable to directly conduct experiments related to 3DGS. We believe extending the idea of mutual information to 3DGS is a valuable direction for future work.\\n\\n\\n>Q2: What is the rendering speed of the proposed method, especially when compared with 3DGS?\\n\\n\\n**A2:** We did not add this loss at every iteration. In fact, applying it every 100 iterations yields very good results, keeping the total training time under 1.2 times that of the baseline.\\n\\n\\n\\n>Q3: Why choose to pick images instead of training as a whole? Are there images that do not belong to the target scene? \\n\\n\\n**A3:** The issue of limited acquisition budget is **well addressed** in ActiveNeRF[4] presented at ECCV 2022. The main idea of their paper is that NeRF usually requires a large number of posed images and generalizes poorly with limited inputs, and that training a well-generalized NeRF requires observing the whole scene. This poses challenges under real-world applications such as robot localization and mapping, where capturing training data can be costly, and perception of the entire scene is required. 
\\n\\nOur approach **strictly follows the acknowledged setup in the domains of active learning and NeRF**, ensuring a fair comparison with state-of-the-art (SOTA) methods. \\n\\n\\n\\nFinally, we thank the reviewer once again for the effort in providing us with valuable suggestions. We will continue to provide clarifications if the reviewer has any further questions.\\n\\n\\n**References**\\n\\n[1] Yu, Alex, et al. \\\"pixelnerf: Neural radiance fields from one or few images.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\\n\\n[2] Yang, Yifan, et al. \\\"Cross-ray neural radiance fields for novel-view synthesis from unconstrained image collections.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[3] Liu, Tianqi, et al. \\\"MVSGaussian: Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n[4] Pan X, Lai Z, Song S, et al. Activenerf: Learning where to see with uncertainty estimation[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 230-246.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks again for your valuable feedback! Could you please let us know whether your concerns have been addressed? We are happy to make further updates if you have any other questions or suggestions.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed response. However, I noticed that some of my questions were not addressed directly. For instance, there is no comparison provided regarding training time or peak memory consumption. Additionally, the experiments in the rebuttal are not convincing, as the base method used is NeRF alone. 
I wonder why more advanced NeRF variants, such as FreeNeRF or RegNeRF, were not utilized for comparison.\\n\\nBased on these concerns, I tend to maintain my score.\\n\\nBest regards,\"}", "{\"comment\": \"Thanks again for your valuable feedback! Could you please let us know whether your concerns have been addressed? We are happy to make further updates if you have any other questions or suggestions.\"}", "{\"comment\": \"We would like to express our sincere gratitude for the reviewer's constructive suggestions and comments. Since the deadline is approaching, we sincerely hope the reviewers can read our response. Please let us know if the reviewers have any comments about our response or any other additional concerns. We are eager to provide any further clarifications and discussions to help the evaluation.\"}", "{\"title\": \"Re: Rebuttal by Authors\", \"comment\": \"The reviewer expresses gratitude to the authors for their rebuttal.\\n\\nHowever, I am still a bit confused about the motivation behind \\\"the richness of the information provided by evaluating the differences at every point along two light rays\\\". I recommend that the authors present a clearer explanation. Besides, the current lack of experiments related to 3DGS makes the proposed approach less significant at this time.\\n\\nTherefore, I will maintain my rating.\"}", "{\"metareview\": \"This paper received negative feedback from the reviewers, who expressed concerns regarding the lack of novelty and the limited comparisons. Specifically, the use of mutual information for few-shot novel view synthesis has been explored in previous methods. The AC has carefully reviewed all the feedback and the authors' rebuttal and agrees that the current version is not yet ready for acceptance. 
The AC strongly encourages the authors to address all the reviewers' concerns and resubmit to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"There were no reviewer discussions.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We greatly appreciate the reviewer's comments and valuable suggestions.\\n\\n>Q1: This methodology introduces significant complexity, especially in sparse view sampling, involving greedy algorithms and complex mutual information metrics. However, the observed improvements over simpler baselines are relatively minor, which may not justify the added complexity.\\n\\n**A1:** We appreciate the reviewer's feedback. The primary goal of this paper is to provide a novel perspective on the problem of sampling under limited data. We identified specific application scenarios within two NeRF setups and demonstrated performance improvements over baseline algorithms in both cases. Additionally, more application scenarios can be explored in future work. \\nWe followed the settings of ActiveNeRF. Since we only need to compute CLIP similarity and camera distance, the time overhead is only 1/5 of that of ActiveNeRF.\\n\\n\\n>Q2: The attempt to address the task of few-shot novel view synthesis through minimization of mutual information between viewpoints has already been explored, especially in the CVPR 2022 paper InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering. The methodology of this paper seems to be a re-interpretation of ideas used in DietNeRF (semantic space) and InfoNeRF (pixel space) within the perspective of mutual information theory. I ask the authors to give a more thorough theoretical comparison between this work and the methods that I have mentioned.\\n\\n**A2:** We thank the reviewer for the comment. First, we provide two application scenarios: sparse view sampling and few-shot synthesis. For the second scenario, our approach is entirely different from that in InfoNeRF[1]. 
InfoNeRF uses Shannon entropy to define the entropy of a discrete ray density function, whereas we base our method on concepts from mutual information; the two approaches involve significant conceptual differences.\\n\\n\\n**References**\\n\\n[1] Kim, Mijeong, Seonguk Seo, and Bohyung Han. \\\"Infonerf: Ray entropy minimization for few-shot neural volume rendering.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\"}" ] }
5Qxx5KpFms
Breaking Neural Network Scaling Laws with Modularity
[ "Akhilan Boopathy", "Sunshine Jiang", "William Yue", "Jaedong Hwang", "Abhiram Iyer", "Ila R Fiete" ]
Modular neural networks outperform nonmodular neural networks on tasks ranging from visual question answering to robotics. These performance improvements are thought to be due to modular networks' superior ability to model the compositional and combinatorial structure of real-world problems. However, a theoretical explanation of how modularity improves generalizability, and how to leverage task modularity while training networks remains elusive. Using recent theoretical progress in explaining neural network generalization, we investigate how the amount of training data required to generalize on a task varies with the intrinsic dimensionality of a task's input. We show theoretically that when applied to modularly structured tasks, while nonmodular networks require an exponential number of samples with task dimensionality, modular networks' sample complexity is independent of task dimensionality: modular networks can generalize in high dimensions. We then develop a novel learning rule for modular networks to exploit this advantage and empirically show the improved generalization of the rule, both in- and out-of-distribution, on high-dimensional, modular tasks.
[ "scaling laws", "modularity", "neural network", "generalization", "compositionality", "combinatorial generalization" ]
Accept (Poster)
https://openreview.net/pdf?id=5Qxx5KpFms
https://openreview.net/forum?id=5Qxx5KpFms
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wKRcg6Yj8U", "lidSjWCsNc", "lBOSQQXFTK", "b3R2HOZbep", "WJM5rAeazQ", "MJFjxLXsrm", "LX1tOmzlod", "IJqrOX3Eaf", "HqILmnbtZd", "HVZ0cfipGi", "GQtGYLDabk", "EUvaFrGQ4a", "8tVLOgnKig", "8mNsRZeAJr", "79xpN8yCFG", "458889POda" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_review" ], "note_created": [ 1732497594641, 1730618232562, 1731623280225, 1732651632255, 1732654925448, 1731623258011, 1730152314240, 1731623209051, 1730678576375, 1732223569694, 1732480467689, 1731623162345, 1732658014645, 1734705081128, 1737523403759, 1730492342373 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_vQzj" ], [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_vQzj" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_MzHa" ], [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_2nhB" ], [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_MzHa" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_2nhB" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_uK9v" ], [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Authors" ], [ "ICLR.cc/2025/Conference/Submission565/Area_Chair_bRCB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission565/Reviewer_uK9v" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response.\", \"q1\": \"In fact, the double descent phenomena can occur both with respect to larger capacity $p$ and dataset size $n$: test error can increase with $n$ in certain regimes. 
Nakkiran et al. 2020 and Schaeffer et al. 2023 both provide empirical examples of this, and this is also supported by theory (e.g. D'Ascoli et al., 2020, Rocks et al. 2022). In summary, this is because the large spike in test error when $p \\\\approx n$ can lead to non-monotonic behavior of the test loss with respect to $n$. We are happy to provide further references on this point.\", \"q2\": \"Certainly, we agree that the $\\\\hat{y}_j$ are not independent. However, given that each $\\\\hat{y}_j$ is a function of a separate subspace $\\\\hat{U}_j^T x$ of the original $x$, we believe it is reasonable to approximate the effect of each module $\\\\hat{y}_j$ as being independent. On the other hand, if all the modules were perfectly correlated, then the appropriate scaling factor would be $\\\\frac{1}{K}$ instead.\\n\\nWe emphasize that this argument is simply made to justify the scaling factor in Equation (3); in fact, any scaling factor can be used here (including no scaling factor) and will not change the validity of our theory.\", \"q3\": \"We understand your concern regarding equation 4: it may seem strange that on the left hand side, the expression is a function of $Ux$ while on the right hand side, there is no $Ux$.\\n\\nWe again emphasize that we are making a modeling assumption in equation (4): we are assuming the function is linear in $U$. This, in fact, *directly implies* the right hand side for some choice of features $\\\\varphi$. We do not claim that equation 4 holds in general without this linearity assumption.\"}", "{\"summary\": \"The paper shows that the sample complexity associated with training modular neural networks is independent (under certain conditions) of the input dimensionality and does not follow the same exponential increase with input dimension as in the case of monolithic or traditional neural networks (NNs).\\n\\nFirst, the authors present a derivation of training and test error in monolithic NNs when the task is non-modular. 
The task and the NN are modeled linearly based on features generated using the input. The authors then empirically validate that NNs with different architectures (loosely) match the theoretical trends when varying the input dimension, the number of samples, and the number of parameters. Note that the task considered for this experiment is modular. \\n\\nThe authors then theoretically compute the training and generalization errors of modular NNs given that the underlying task is also modular with the same structure. Each module is associated with a small NN (modeled linearly) and an input projection mechanism that reduces the module input dimensionality. The task is also modeled in the same way where the parameters are randomly initialized. \\n\\nThe resulting closed-form solution shows that the training error is independent of the input dimension, and the test error under the condition of under-parametrization is also independent of the input dimension. This result hinges on the input projection associated with each module, where the dimensionality associated with the overall input is reduced. The authors then propose a method to initialize (or learn) the parameters associated with the input projections. Once initialized, the modular NNs are trained end-to-end. \\n\\nEmpirical results show that modular NNs that are learned using the proposed initialization mechanism achieve significant improvements over monolithic NNs and modular NNs conventionally trained.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The theoretical derivations are sound and very well done, and the paper is well written. The authors did a good job showcasing how modular networks can be related to monolithic networks (based on certain linear assumptions).\\n\\nIntuitively, modeling of the data and learning of parameters related to module input bottlenecks make sense. 
This is similar to learning the connectivity associated with module input or limiting the number of module inputs to avoid module collapse. \\n\\nThe experiments show a clear trend that the input projection mechanism results in better performance and sample complexity, as compared to monolithic NNs and end-to-end trained modular NNs.\", \"weaknesses\": \"The major weakness of the paper is the consideration of a single layer of modules and data generating system. In such a system, the output is a linear composition of the module outputs. This may not be true for many real world systems where multiple such modular layers can exist in a hierarchy.\", \"questions\": \"Continuing with the previous weakness, the algorithm to learn or initialize the input projection parameters may not work in such a case as it is dependent on the initial module NN weights.\\n\\nThe generalization performance for the compositional CIFAR-10 experiment can be divided into input class permutations present in the training data vs. not present in the training data to further dissect the difference between monolithic NNs and modular NNs. The sample complexity experiments with compositional CIFAR-10 tasks are not present and should be added to further strengthen the claims. \\n\\nFor individual tasks considered, there appears to be a large amount of tuning of methodology to train the modular NNs and the module input projections. (Referring to the appendix)\\n\\nHow would the solution to training and test errors change if the input projections from each module were removed, and the modules considered the input x in its entirety? This is consistent with the current mixture-of-experts (MoE) models. \\n\\nIs there a validation set used for the experiments or is the generalization performance reported from the last training iteration?\\n\\nDo the modular NNs treat the number of modules as a hyper-parameter and tune it to improve the performance?
Or is it an architectural characteristic, such as the width or depth in monolithic NNs, that is fixed? \\n\\nThe CIFAR-10 experiments are run only for a single epoch; will increasing the number of epochs result in better performance for networks?\", \"minor\": \"Equations are referred to in the appendix when they are also present in the main part of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
Figure 10 shows an additional experiment demonstrating the effect of choosing different numbers of architectural modules on the sine wave regression task. We find that adding more architectural modules helps performance even beyond the number of task modules.\\n\\n**Relevance to modern architectures**\\n\\nWe appreciate this point as well. In section 5, we have expanded our discussion of how our results apply to modern architectures.\\n\\nIn terms of practical relevance, we believe our results show that when training modular architectures, more emphasis should be placed on the optimization methods used to train the modules. Naively applying gradient descent may not be sufficient for effective training of these architectures. Our results also suggest a potential theoretical basis for why modularity is so effective in practice: namely, modularity breaks up a large high-dimensional task into easier to learn low-dimensional tasks.\"}", "{\"comment\": \"I would like to thank the authors for their clear responses.\\n\\nI still believe (also after reading the rest of the reviews -- and the authors' responses) that this is an important and solid paper that will be valuable to others if published. 
\\n\\nOf course, mostly because of the restrictive assumptions that I also mention in the review, I cannot describe the paper as \\\"groundbreaking\\\" -- and for that reason I prefer to keep my score as is.\"}", "{\"comment\": \"Thank you to the authors for engaging with my questions in such depth.\", \"modularity_definition\": \"I'm still confused about how attention is a modular architecture by your definition of modularity (which you describe in your first paragraph), since, while attention mechanisms share weights between sequence elements, they do not have the property of modularity described here where the input features are explicitly subdivided and passed to different components.\\n\\nWhile the authors have added a more detailed explanation of both the confusing formulation of the feature matrix and the training/test loss in the comment here, the paper itself does not seem to have had these clarifications added, so my issue with the paper on that front still stands. I realize that this feature matrix formulation might be standard for those with specific familiarity with the Jacot paper mentioned, but I believe it is still quite unintuitive to the average ML researcher or practitioner, and without explanation this conceptual merging of features and parameters will continue to be confusing (if the feature matrix already has an output dimension of d, what is the W matrix doing? Is it just a dxd matrix mapping within output space? The shape of W is not explicitly stated, making this unclear). \\n\\nI have increased my score slightly, since the authors provided evidence that the benefits shown do not require the `k` of the underlying data to be known ahead of time. However, I still think on the whole this paper is confusing, unclear in its notation, and hard to read or reason about.\"}", "{\"comment\": \"Thank you for your detailed review and for highlighting areas where our work can be strengthened. 
We address your concerns point by point below.\\n\\n**Novelty of theoretical results**\\n\\nWe understand that projecting high-dimensional data into lower-dimensional subspaces is a well-known technique. However, our contribution lies in providing a *quantitative analysis* of how modular architectures affect sample complexity in neural networks.\\n\\n\\nOur work is the first to derive explicit, non-asymptotic expressions quantifying how modular architectures can circumvent the exponential sample complexity associated with high-dimensional inputs. This provides a theoretical foundation for designing modular networks that are efficient in practice. \\n\\nWhile our analysis assumes linearity and specific modular structures, these simplifications are essential for analytical tractability. Future work can build upon our framework to explore more general, nonlinear architectures and investigate how modularity affects their generalization properties.\\n\\n**Scalability of the proposed algorithm**\\n\\nYou raise a valid concern regarding scalability.\\n\\nWe note that approximating kernel methods for computational efficiency is a well-established field of research which can reduce the computational cost of kernel methods down to linear time in the size of a dataset (such as using random Fourier features). We do not explore these methods in this work as it is orthogonal to our contributions; however, we believe combining these well-known methods with our proposal is a fruitful direction.\\n\\n**Potential ensemble effects**\\n\\nThis is an interesting point.\\n\\nIn our compositional CIFAR-10 experiments, however, the modules all have the same weights (they are weight tied). 
Thus, the improved performance of the modular architecture cannot be attributed to an ensemble effect in this case.\\n\\n**Overclaiming and empirical comparisons**\\n\\nWe apologize if any claims appeared overstated.\\n\\nWhile our theoretical model captures the general trends observed empirically, certain deviations occur due to factors not accounted for in the simplified model, such as optimization dynamics. We discuss these factors in Section 3.3.\\n\\nWe are happy to soften any specific claims in our submission to match our empirical results.\\n\\n**Comparison to benchmark results**\\n\\nWe emphasize that Jarvis et al. tests on Compositional MNIST while we test on Compositional CIFAR-10, a significantly harder task. Thus, their performance metrics cannot be compared with ours.\\n\\n**Confusion about test error increasing with larger datasets**\\n\\nWe understand the confusion.\\n\\nIn Figure 1, the increase in test error with more data reflects the double descent phenomenon (Belkin et al.). This is a now well-established phenomenon in which test error increases with the number of training points until the interpolation threshold is reached (training points = number of parameters). Beyond this point, the test error decreases with the number of training points; this is also clearly illustrated in Figure 1. We are happy to provide further references on this phenomenon if helpful.\\n\\n\\n**Independence of $y_j$**\\n\\nIn equation 3, we wish to rescale the summation such that the left hand side has constant magnitude as the number of modules varies. As a heuristic, if we treat each of the terms in the summation as independent under the assumption that $x$ is a random variable, we then find that the variance of the summation scales with the number of modules $K$. 
Thus, it is natural to divide the summation by $\\\\sqrt{K}$ to normalize.\\n\\n**Equation 4 clarification**\\n\\nThat's correct: as mentioned in the text before and after equation 4, we make a linearizing assumption on the parameters of the model. We assume that the model output is a linear function of the model parameters (which now include both the module parameters and the module input projection). This is, in fact, the same assumption made in Section 3.\\n\\n**Equation 15 clarification**\\n\\nEquation 15 simply solves equation 14 for $\\\\theta$. Note that equation 14 is a linear equation in $\\\\theta$. It is well known that the minimum norm solution for a variable in a linear equation (if a solution exists) can be computed by the pseudoinverse (denoted by $\\\\dagger$ in our notation): the solution to $Y=X \\\\theta$ minimizing $||\\\\theta||$ is $\\\\theta = X^\\\\dagger Y$. We are happy to provide further references if helpful.\\n\\n**Mixture of Experts paper reference**\\n\\nThank you for pointing this out. We have fixed this in our latest revision.\"}", "{\"summary\": \"In this paper, the authors study modular neural networks and their ability to learn functions that are themselves modular insofar as they have either a compositional nature or a modular composition in their statistics. The authors first provide a theorem outlining expected scaling laws for sample complexity and training accuracy in a curated linear task setup. They then corroborate the predictions from this theorem with numerical experiments. Following this, they present a task setup with explicit ground-truth modular structure and prove a second theorem outlining similar scaling properties for this novel task setup along with a particular modular NN architecture. The result is reduced scaling complexity for such tasks when NNs are appropriately modular. 
The authors then propose an initialization scheme for NNs which first learns module initialization in a self-supervised fashion using task statistics. Finally, the authors show that this approach behaves as expected on non-trivial tasks such as compositional CIFAR, and that the proposed method works well on other modular architectures beyond the one used for their theoretical results.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper presents an excellent set of theoretical results outlining expected scaling laws for modular networks when tasks have a modular structure. It also presents a practical initialization scheme for potential models that is based on self-supervised alignment with task statistics. Lastly, it validates predictions with non-trivial and relevant experiments with architectures that go beyond the ones used for theory, outlining the potential generality of the result.\", \"weaknesses\": \"The paper is sound. There is potential for improvement in two key areas:\\n\\n1. In experiments, it is unclear if the generalization advantage of the modular networks remains if one factors in the pre-training (i.e. learning module initialization). In other words, for the same total compute, would a monolithic model do as well as the modular one for which some of the compute budget went toward initialization? My apologies if I missed it, but this is a key point that would factor into real-world scaling laws.\\n\\n2. What happens if there is a mismatch between the task modularity and the NN module count? In experiments, the models could accurately recover task modularity when the number of modules is known. In contrast, in other tasks, this is not known a priori. How would the models behave in this mismatched environment? To be fair, the authors acknowledge this issue, but I wonder if some rapid experiments could outline expected behaviors in this case. 
Once again, apologies if this has been explored and I missed it.\\n\\n3. While this is no doubt very relevant for the field, its overall impact with respect to modern architectures remains unclear. Some discussion about the use of such methods in modern settings, such as in the presence of attention, could greatly enhance the scope of the result. This is not necessary however, as this is a good paper in the current scope.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
However, in Table 3 we explicitly test accuracies in the case where test inputs have distinct class combinations and find similar results.\\n\\nFor this task, we opt to use accuracy (fixing training samples) instead of sample complexity (fixing accuracy) as our performance metric since it is more natural to treat the Compositional CIFAR-10 dataset as unlimited (given the combinatorial number of samples that can be drawn). \\n\\n**Comments on tuning**\\n\\nOur experiments involve a number of hyperparameters. Unless otherwise mentioned, we set the hyperparameters to the most natural choice without tuning them. For certain hyperparameters, we do sweep over different values, with the sweep range indicated in Appendix E. We are happy to clarify any specific hyperparameter choices if needed.\\n\\n**Experiment with no input projection**\\n\\nWe appreciate this interesting suggestion. We note however, that in our Compositional CIFAR-10 experiment, all modules have the same weights (they are weight-tied). Thus, if they all had the same inputs, their outputs would also be the same. The final model output is the concatenation of the module outputs which in this case would perform very poorly.\\n\\n**Comment on validation set**\\n\\nFor both the Compositional CIFAR-10 and sine wave regression tasks, we use a separate validation set. Note that for the sine wave regression task, each input is drawn completely independently from an infinite sized dataset; thus, any points not trained on can be treated as validation data.\\n\\n**Number of architectural modules**\\n\\nWe treat the number of architectural modules as a (potentially tunable) architectural characteristic. It is fixed (at 32) for the Compositional CIFAR-10 task and tuned over for the sine wave regression task. 
Figure 10 illustrates performance for different choices of the number of architectural modules for the sine wave regression task.\\n\\n**Number of epochs for Compositional CIFAR-10 task**\\n\\nIn this task, the number of possible training inputs can be treated as virtually infinite (for large $k$). Thus, we believe it is more practically reasonable to fix the number of epochs at one rather than repeat training on the same input multiple times.\\n\\n**Comment on equation references**\\n\\nThank you for pointing this out. We have fixed this in our latest revision.\"}
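The point made above about removing input projections can be checked numerically. A toy sketch (our own; the shapes and the shared linear map standing in for a weight-tied module are hypothetical): with tied weights and no distinct per-module projections, every module returns the same output, so concatenating them adds nothing; distinct projections break the symmetry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes (not from the paper): input dim m, module input dim b,
# K weight-tied modules sharing one linear map W.
m, b, K = 16, 4, 3
x = rng.normal(size=m)
W = rng.normal(size=(b, b))  # shared (weight-tied) module weights

# Without per-module input projections, every module sees the same input
# and produces the same output, so concatenation is degenerate.
same_input = x[:b]
outputs_no_proj = [W @ same_input for _ in range(K)]
assert all(np.allclose(outputs_no_proj[0], o) for o in outputs_no_proj)

# With distinct projections U_j, the still weight-tied modules produce
# different outputs, making the concatenated model output informative.
U = [rng.normal(size=(b, m)) for _ in range(K)]
outputs_proj = [W @ (U_j @ x) for U_j in U]
assert not np.allclose(outputs_proj[0], outputs_proj[1])
```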
Why is it a matrix to begin with rather than a feature vector), and this confusion made it hard to understand future claims made in the paper (especially since the central claim was about the effect on generalization of _input_ dimension, which is mediated by the function implied in this feature matrix)\", \"The forms of the expected training and test loss could have been broken down in a clearer and more intuitive way, rather than simply being presented as not-very-comprehensible formulas\", \"This paper assumes that the only way to benefit from the generalization behavior of modularity is to have explicitly modular structure; it would have been interesting if it had also engaged with whether modular data gives you generalization benefits without a parallel parameter structure (since in practice modern models seem to generalize well without the benefit of this)\"], \"questions\": [\"What is the explicit definition of modularity being used? This concept was referenced without ever being really explicitly defined in a general-but-still-technical sense (and attention was given as an example). I ended up being confused about whether the focus was on independently functioning parts of a network, or shared weights in a more general sense .\", \"As mentioned in \\\"Weaknesses\\\": what assumptions are made in general about the structure of the feature matrix? It is indicated in the modular version of the model that arbitrary nonlinear transforms of the input are considered as valid feature matrices, but this isn't clarified for the first treatment of the model\", \"Do the benefits cited require being correct about the number (k) of modules in the underlying data? How much do positive results depend on being correct in your choice of k relative to what is present in the underlying data?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"thank you for the response. 
I am happy with the arguments and maintain my score.\"}", "{\"title\": \"unsatisfactory answers to questions\", \"comment\": \"Q1: I wrote that I was concerned that \\\"test error INCREASES with larger datasets\\\" in figure 1 and you answered that this is the double descent phenomenon. From what I understand, the double descent phenomenon is about test error decreasing with larger capacity p, not dataset size n.\", \"q2\": \"Your answer to explain why we should treat the \\\\hat{y}_j's as independent does not make sense to me, sorry. All the \\\\hat{y}_j's are deterministic functions of the same x, so they are highly dependent, and thus not independent random variables.\", \"q3\": \"My question was not about linearity wrt theta but about \\\\hat{y} being a function of U x on the LHS but not on the RHS.\"}", "{\"comment\": \"Thank you for your thoughtful review and for highlighting areas where our paper can be improved. We appreciate your insights and address your concerns point by point below.\\n\\n**Confusion with notation and explanation of the feature matrix**\\n\\nWe apologize for the confusion caused by the notation in our theoretical model, especially regarding the feature matrix. In our model, the feature matrix arises from applying a feature mapping to the input data, transforming each input $x$ lying in $m$ dimensions into a higher-dimensional feature matrix $\\\\phi(x)$ lying in $d \\\\times P$ dimensions. Here, $d$ denotes the number of output dimensions of the model and $P$ denotes the number of features per output dimension. We do not make any particular assumptions about exactly how $\\\\phi$ is constructed (it can be arbitrarily nonlinear and complex)- the main property it must satisfy is that the features $\\\\phi(x)$ are distributed as a Gaussian.\\n\\nThe reason why we use a feature matrix instead of a flattened feature vector is that it allows us to easily account for non-scalar output sizes ($d > 1$) with a fixed number of parameters $P$. 
Another option to account for multi-dimensional outputs is to have $d \\\\times P$ parameters (one set of parameters per each output dimension) while the number of features is fixed at $P$. This is also valid; however, our choice is more consistent with some prior literature (e.g. Jacot et al.).\\n\\n\\nWe are happy to revise any particular points of confusion in our revision.\\n\\n**Clarity of expected training and test loss formulas**\", \"we_clarify_the_forms_in_theorem_1_as_follows\": \"the test loss consists of three terms. The first term is largest near the interpolation threshold ($dn \\\\approx p$) and decreases away from it- this causes a spike near the interpolation threshold. Intuitively, this corresponds to overfitting to noise in the training data. The second term is negative and corresponds to the reduction in loss caused by capturing information about the true model. In the overparameterized regime, it linearly grows in magnitude with the number of training points, and in the underparameterized regime it is constant with amount of training data (as information in more training points can't be captured by the model). The last term is a constant offset corresponding to the loss when the number of parameters approaches infinity.\", \"the_training_loss_is_a_product_of_two_terms\": \"the first corresponds to the information about the underlying target function not captured by the model and the second corresponds to the amount of training data available that can't be captured by the model. If we have more parameters than training data, then the training loss is zero since all the information can be captured. 
Otherwise, there will be information loss proportional to the excess training data times the remaining information about the target function.\\n\\nWe are happy to revise any particular points of confusion in our revision.\\n\\n**Does modular data alone help without a modular architecture?**\\n\\nThe reviewer raises an interesting question about whether it is possible to benefit from merely modular data without a modular architecture. Our experiments on Compositional CIFAR-10 (see Figure 4) suggest the answer is no: a non-modular architecture performs relatively worse on higher-dimensional modular tasks than a modular architecture. Of course, it is possible that the non-modular architecture has learned the modular structure of the task to some extent (just not as well as a modular architecture) and we think this is an interesting future direction to explore.\\n\\n**Definition of modularity used**\\n\\nWe appreciate the need for a clear definition of modularity.\\n\\nIn this paper, we define modularity as the decomposition of a task or function into distinct, independently operating components or modules. Each module processes a subset of the input dimensions and contributes to the final output in a compositional manner. This structure allows for specialized processing within modules and recombination to solve complex tasks. Our modular neural network architecture mirrors this by having separate subnetworks (modules) that handle specific input projections.\\n\\nWe also highlight that in other literature, modularity may be used in other, potentially different ways.\\n\\n**Dependency on correct number of modules**\\n\\nThis is an insightful question.\\n\\nIn our experiments, in general, the number of architectural modules is always greater than the number of modules in the task: thus, *these do not need to match*. For example, in the case of Compositional CIFAR-10, we fix the number of architectural modules to 32 and vary the number of task modules from 1 to 8. 
Figure 10 shows an additional experiment demonstrating the effect of choosing different numbers of architectural modules on the sine wave regression task. We find that adding more architectural modules helps performance even beyond the number of task modules.\"}", "{\"comment\": \"Thank you for your continued engagement with our work and for providing additional feedback. We are grateful that you have increased your score and appreciate the opportunity to further clarify and improve our paper.\\n\\n**Modularity Definition and Attention Mechanisms**\\n\\nWe apologize for the confusion regarding our inclusion of attention mechanisms as an example of modular architectures. Our intent was to illustrate that certain aspects of attention mechanisms exhibit modular properties, but we understand that this may not have been clear within the context of our specific definition.\\n\\nWe define modularity as the decomposition of a task or function into distinct, independently operating components (modules), each processing a subset of the input features and contributing to the final output in a compositional manner. In self-attention, each output token is computed as a linear combination of input tokens, with the coefficients of the linear combination being the computed self-attention \\\"weights\\\" (computed via softmax). In many NLP and CV tasks, the self-attention \\\"weights\\\" are sparse: each output token is primarily sensitive to only a small number of input tokens. In this sense, we may consider self-attention to be performing modular computation.\\n\\n**Clarity of Notation and Feature Matrix Explanation**\\n\\nWe apologize for not incorporating the detailed explanations provided in our response into the paper itself. It is important to us that the paper is clear and accessible to all readers, regardless of their familiarity with prior work. We have revised the paper to include the clarifications regarding the feature matrix. 
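The feature-matrix formulation discussed in this response can be summarized as a shape check (our sketch; the sizes and the random linear feature map are illustrative assumptions, chosen linear so that the Gaussian-features property holds by construction):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes (not from the paper): input dim m, output dim d,
# P features (and hence P parameters) per output dimension.
m, d, P = 8, 3, 20

A = rng.normal(size=(d * P, m))  # fixed random lift defining the feature map

def phi(x):
    # Feature map producing a d x P feature *matrix*. Since A and x are
    # Gaussian, the features here are Gaussian, as the model assumes;
    # more complex nonlinear maps are also admissible.
    return (A @ x).reshape(d, P)

x = rng.normal(size=m)
W = rng.normal(size=P)  # a single P-dimensional parameter vector ...
y = phi(x) @ W          # ... shared across all d output dimensions
assert y.shape == (d,)  # (d x P) @ (P,) -> d-dimensional output
```

The alternative mentioned in the response, one parameter set per output dimension, would instead use a d x P parameter matrix with a fixed P-dimensional feature vector.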
Specifically, we have:\\n\\n- Clearly defined how the feature matrix is constructed from the input data, including any assumptions made about its properties.\\n- Explicitly stated the shapes and roles of all matrices involved, including the feature matrix $\\\\varphi(x)$ and the weight vector $W$, to eliminate ambiguity.\\n- Provided an explanation of why we use a feature matrix instead of a feature vector.\\n\\nRegarding your question about the feature matrix already having an output dimension of $d$ and the role of the weight vector $W$, we realize that the notation may have been confusing. In our formulation, the feature matrix $\\\\varphi(x)$ maps the input $x$ to a feature space of dimension $d \\\\times P$. The weight vector $W$ then maps these features to the output space, and has dimensions $P \\\\times 1$. The output is computed as $y = \\\\varphi(x) W$. We have made sure to clearly state the dimensions of all quantities and explain how they interact to produce the output.\\n\\nOnce again, we appreciate your feedback and are committed to making our paper as clear and comprehensible as possible. We believe that the revisions we made will address your concerns and enhance the overall quality of our work.\"}", "{\"metareview\": \"This paper derives a theoretical quantitative analysis of the generalization abilities of modular architectures. The analysis shows that under certain assumptions (linearity, specific modular structure of architecture and data), models can avoid the exponential sample complexity associated with high dimensional inputs. The paper corroborates this with empirical results on two toy tasks.\\n\\nReviews agree that the studied problem is interesting, but disagree substantially about the relevance of the results. The main concern is that the assumptions might be too restrictive to apply to realistic cases and modern architectures. The authors defend their paper as a stepping stone towards better understanding of modular generalization. 
\\n\\nFrom the reviews it seems clear that the results will be of interest to at least a part of the community. I lean towards counting being controversial as a potential advantage in this case, and therefore recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"A big part of the discussion focussed on the clarity of the theory, both in terms of notations and definitions. The authors have incorporated this feedback to improve and clarify their paper, but not all reviewers are satisfied with the presentation.\\n\\nSeveral questions about the paper, such as the dependence on the number of modules, were raised and answered satisfactorily by the authors.\\n\\nApart from clarity, the main open concern is about the relevance of the results. While uK9v remains unconvinced, vQzj and 2nhB are convinced that these results are impactful.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The authors present both theoretical and algorithmic results regarding the generalization ability of a particular form of modular architectures. They first highlight theoretical results on generalization showing exponentially large sample complexity as a function of the input dimension. Then they present a particular class of modular architectures as a sum of experts and assume that the training data was generated by this same architecture. They show that, thanks to each module making a low-dimensional projection before processing the data, the scaling behavior is better behaved. Then they present a kernel-style algorithm to initialize such an architecture, to be then fine-tuned by usual supervised learning and SGD. 
Finally, they show results on a toy 1-dim sine-wave regression task and on the recently introduced compositional MNIST task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Much remains to be understood about the generalization behavior of neural nets, especially the types that have a modular architecture, so advances on the theory (in special cases) that are presented do seem useful.\", \"weaknesses\": \"(1) the theoretical results are not surprising: projecting the m-dim input to several b-dim low-dimensional representations unsurprisingly reduces the exponential badness from m to b. The theory is also of fairly limited scope, with lots of unreasonable assumptions (e.g., of linearity wrt parameters) that may not tell us as much as we would like for more general forms of modular architectures.\\n\\n(2) the proposed algorithm is unlikely to scale well in terms of computational efficiency beyond small-size problems and into frontier AI, given the use of kernel methods in the novel part of the method\\n\\n(3) the fact that all modules are initialized independently and using the same (randomized) procedure suggests that a significant part of the advantage could come from an ensemble effect (which always helps generalization)\\n\\n(4) the paper seems to overclaim in multiple places, e.g., suggesting that their results track empirical behavior of modern neural nets (even the empirical comparisons don't match the theory, e.g., fig 2 bottom right).\\n\\n(5) I did not find numerical comparisons against benchmark results from other papers, and when I look at Jarvis et al 2023, their figures show much lower errors. Hence the experimental results may not be that good after all.\", \"questions\": \"(1) I was confused by the results in figure 1, whereby test error INCREASES with larger datasets. 
This seems incompatible with empirical observations and traditional statistical analyses of generalization.\\n\\n(2) I did not understand in what sense the y_j could be considered independent (and what is the random variable), after eq 3.\\n\\n(3) eqn 4 seems wrong: on the LHS y is a function of the linear projection U x, whereas on the RHS U and x only interact via the presumably non-linear function phi.\\n\\n(4) why should we expect eqn 15 to give the minimum norm solution? (i.e. why is it a solution and why is it minimum norm, from what class)\\n\\n(5) You should add the citation of the original mixture of experts papers (e.g. Jacobs et al 1991).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5MNJKgaj54
ScaLES: Scalable Latent Exploration Score for Pre-Trained Generative Networks
[ "Omer Ronen", "Ahmed Imtiaz Humayun", "Richard Baraniuk", "Randall Balestriero", "Bin Yu" ]
We develop Latent Exploration Score (LES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO utilizes continuous optimization within the latent space of a Variational Autoencoder (VAE) and is known to be susceptible to over-exploration, which manifests in unrealistic solutions that reduce its practicality. LES leverages the trained decoder’s approximation of the data distribution, and can be employed with any VAE decoder–including pretrained ones–without additional training, architectural changes or access to the training data. Our evaluation across five LSO benchmark tasks and twenty-two VAE models demonstrates that LES always enhances the quality of the solutions while maintaining high objective values, leading to improvements over existing solutions in most cases. We believe that new avenues to LSO will be opened by LES’ ability to identify out of distribution areas, differentiability, and computational tractability.
[ "VAE", "Latent Space Optimization", "OOD" ]
Reject
https://openreview.net/pdf?id=5MNJKgaj54
https://openreview.net/forum?id=5MNJKgaj54
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zzBvku0AhJ", "uBDYFsteZ8", "t9LgPBgVWR", "gLEXPyuBe9", "fxLHfyNkOD", "eR1NhaEySF", "eGILF5qDEE", "V3kkn7Vmmu", "RgLEfYaUEe", "QCha0JiALZ", "PDuxPWBRni", "KyTVrt4nIw", "GVR56cSFll", "DTQK8W6p1R", "CAUValW5Ot", "BxIyxBWql6", "BM08X5epiL", "9JVqsPZZ6S", "8Q5wmJsPxE", "6CjIzQ1HQB", "3pxRyLuxBv" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732158189279, 1732466893848, 1732635687411, 1737524203448, 1733250494996, 1732546514294, 1732158507724, 1732615995969, 1732328417044, 1732301663593, 1732466923345, 1730571858265, 1732158074273, 1730678350287, 1732158754114, 1732328266413, 1732773105199, 1734747823476, 1732325056309, 1732577672591, 1730460777214 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_o3Wo" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_yzw5" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_UaAz" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_yzw5" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_o3Wo" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_UaAz" ], [ 
"ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Area_Chair_Gco6" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Authors" ], [ "ICLR.cc/2025/Conference/Submission12613/Reviewer_UaAz" ] ], "structured_content_str": [ "{\"title\": \"additional results following your comments\", \"comment\": \"We thank the reviewer for their careful reading of our work and for identifying typos, which have been corrected in the revised manuscript. Indeed, \\\"ScaLES/Uncertainty\\\" describes the ratio and not another method.\\n\\n---\\n\\n### Likelihood Clarification\\n\\nFirst, we would like to clarify that the likelihood used in our approach corresponds to that of a sequence of **L categorical random variables**, each with **D categories**.\\n\\nSecond, we wish to emphasize that **ScaLES is not an estimator of p(X=x\\u2223Z=z), but of $p_{X(Z)}(X = x(z))$**. This distinction is depicted in **Fig. 2** in the original manuscript, and we have added further mention of this fact in **l.184-185**. Specifically:\\n- ScaLES does not assume a joint probabilistic model for **X** and **Z**.\\n- Instead, we consider **X** as a deterministic function of **Z**\\u2014the random variable\\u2014mapped through the decoder.\\n\\nThis is an important distinction, as many existing methods, such as the uncertainty method proposed by Notin et al., do assume that **X** follows the conditional distribution **p(X=x|Z=z)**. Consequently:\\n- **ScaLES** describes a single likelihood function **pX(Z)(X = x(z))** over the output space.\\n- **p(X=x\\u2223Z=z)** defines a distinct likelihood function for each **z**.\\n\\n---\\n\\n### Variational Inference and Likelihood\\n\\nWe would like to note that **p(X=x|Z=z)** is estimated through variational inference techniques, which are themselves approximate (e.g., training neural networks). Therefore, it holds no theoretical guarantees under any realistic assumptions. 
Rather, similar to ScaLES, it assumes that the decoder fits the data well, serving as a good-enough approximation to the true distribution.\\n\\n---\\n\\n### New Likelihood-Based Score\\n\\nInspired by the reviewer\\u2019s insightful comments, we have incorporated a new score into our evaluation. This score quantifies the likelihood of the most probable output for a given latent vector **z** under **p(X=x|Z=z)**, formally defined as:\\n\\n**$L(z) = max_x p(X=x|Z=z)$**.\\n\\nNotably, to the best of our knowledge, employing this score as a constraint in LSO has not been explored in prior work.\\n\\n---\\n\\n### Comparative Evaluation with Likelihood Score\\n\\nTo facilitate a direct comparison between these two scores, we reproduced our entire evaluation section (Tables 1\\u20139) to include the likelihood score. We are happy to report that our findings indicate that, in general, **ScaLES outperforms the likelihood score** as a scoring metric. Specifically:\\n- For the **average rank of the top 20 solutions**, ScaLES achieved the lowest average rank of **1.83** (lower is better) compared to **2.8** for the likelihood approach. (**l.480-481**)\\n- For the **top 1 solution**, ScaLES obtained the lowest average rank of **1.97**, outperforming the likelihood approach, which achieved an average rank of **2.93**. (**l.1024-1025**)\\n\\n---\\n\\n### Addressing Transformer Network Study\\n\\nLastly, we thank the reviewer for the suggestion to study Transformer networks. We would like to emphasize (Table 2 in the original manuscript) that we study **10 decoders with a Transformer architecture.** Our study is restricted to VAEs, as all past literature we are aware of conducted LSO using a VAE.\"}", "{\"comment\": \"Dear reviewer, did our response address your concerns and questions? 
If not, we would love to carry out additional experiments and/or provide further clarification.\"}", "{\"comment\": \"Thank you for your feedback, we agree that your suggestions improved the quality of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal summary\", \"comment\": \"We sincerely thank all the reviewers for their feedback and engagement during the rebuttal period.\\n\\nTo the best of our understanding, all the reviews agree that our proposed method outperforms existing baselines in terms of LSO performance. \\n\\nThe only remaining issue is the difference between our approach and the likelihood as calculated using decoder probabilities (p(x|z)), which we addressed in detail in our response to reviewer o3Wo. We briefly summarize the main points here for convenience:\\n\\n- We (and others, e.g., [1]) have highlighted the conceptual differences between these two approaches. Specifically, a likelihood score derived through a change-of-variables is based on **different probabilistic assumptions** (e.g., X as a deterministic function of Z vs. X as a random variable conditioned on Z) compared to one calculated directly from the logits. Notably, in the context of LSO, X is typically considered the most probable sequence and is therefore treated as a deterministic function of Z. \\n\\n- Our experimental evaluation demonstrates a clear performance gap between the two approaches, with our method consistently outperforming the alternative. \\n\\n- Additionally, we conducted an experiment to show that the likelihood approach does not accurately describe the density of a random vector under the softmax transformation, highlighting the differences between the two methods (Fig. 3, l. 1057). Specifically, the likelihood score saturates, failing to capture the correct shape of the distribution. 
\\n\\nBuilding on the above, we believe we provide a compelling explanation of the differences between the two scores, supported by both conceptual insights and empirical evidence, which we hope will be considered.\\n\\n[1] Nalisnick, Eric, et al. \\\"Do Deep Generative Models Know What They Don't Know?.\\\" International Conference on Learning Representations. 2019\"}", "{\"comment\": \"I am really grateful for the additional experiments, and the new results are indeed quite interesting. I am also surprised that this likelihood based approach has not been examined in the prior work.\\n\\nHowever, I am still not convinced by the explanation of the difference between ScaLES and $p(x|z)$. In particular, the additional experiments show that the likelihood based approach perform good as well. Maybe it is worth to investigate a bit more into the difference.\\n\\nOverall, based on the current story of the paper and the rebuttal, I will keep my score.\"}", "{\"title\": \"clarifications and revisions\", \"comment\": \"We thank the reviewer for their careful reading of our work, and we also appreciate their suggestions for improving our manuscript. We follow the original structure of the review in our response.\\n\\n---\\n\\n### CPA Assumption\\n\\nWe thank the reviewer for their insightful comment and for highlighting the relevant reference showing that the CPA assumption is not strictly required to derive the change-of-variable formula. We appreciate the opportunity to clarify why the CPA assumption is important in our work.\\n\\n1. Our derivation in **Equation (10)** enables us to compute the gradient of ScaLES as a function of the softmax probabilities and the decoder\\u2019s derivatives (the slope matrices $A_\\\\omega$) in closed form. This approach reduces the computational burden compared to differentiating through $S(z)$ using automatic differentiation.\\n\\n2. 
Calculating the derivative of ScaLES without the CPA assumption, as described by Ben-Israel, would require (via Jacobi's formula) computing (denoting the decoder as $\\\\boldsymbol{G}(z)=\\\\text{Softmax}(\\\\boldsymbol{L}(z))$): \\n$\\\\frac{dS(z)}{dz_i} = \\\\text{Trace}((J\\\\boldsymbol{G}^T J\\\\boldsymbol{G})^{-1} \\\\frac{dJ\\\\boldsymbol{G}^T J\\\\boldsymbol{G}}{dz_i})$ \\n\\n The term $\\\\frac{dJ\\\\boldsymbol{G}^T J\\\\boldsymbol{G}}{dz_i}$ involves second-order derivatives of the decoder network, resulting in a significant computational burden that scales quadratically with the latent dimension in gradient evaluations. However, representing the decoder as a CPA implies that the second-order derivative of the logits matrix $\\\\boldsymbol{L}$ is always zero, which alleviates the need to compute these derivatives.\\n\\n3. Lastly, the CPA assumption provides intuition by partitioning the output space into regions, each associated with a distinct contraction or dilation, affecting the likelihood of its image.\\n\\nThe reviewer is correct that this important clarification was missing in the original manuscript. We have revised the manuscript to address this point (**l.264\\u2013269**).\\n\\n---\\n\\n### On Scalability\\n\\nWe appreciate the reviewer\\u2019s suggestion to clarify the term \\\"scalable,\\\" and we agree with their assessment. The reviewer is correct that while the calculation of the determinant may not constitute a computational burden in the VAEs we study (which are comparable in size to VAEs used in real applications, e.g., **[1]**), it will not scale gracefully to much larger VAEs. Reflecting on the reviewer\\u2019s comments, we have:\\n- Removed the term \\\"scalable\\\" from the manuscript.\\n- Emphasized the computational complexity of our method in the contributions (**l. 92**).\\n\\n**[1]** Truong, G.B., et al, 2024. 
Discovery of Vascular Endothelial Growth Factor Receptor 2 Inhibitors Employing Junction Tree Variational Autoencoder with Bayesian Optimization and Gradient Ascent. ACS Omega.\\n\\n---\\n\\n### Validity of ScaLES Heuristic\\n\\nWe fully agree with the reviewer\\u2019s observation that the success of ScaLES is closely linked to the performance of the decoder. This is a point we emphasized in **lines 203\\u2013204**. In the revised manuscript, we:\\n- Removed the term \\\"general-purpose.\\\"\\n- Included a more detailed discussion of this under the limitations of Theorem 5 (**l.284\\u2013286**). \\n\\nPlease also see our response to R3 for a discussion regarding the **'Interest in OOD Data Points'**.\\n\\n---\\n\\n### Overclaims in Writing\\n\\nWe thank the reviewer for raising concerns about our claims. In response, we have removed the terms **exact, general-purpose, theoretically motivated,** and **scalable** from the manuscript.\\n\\n---\\n\\n### Baseline Method Descriptions\\n\\nWe have added a description of the Bayesian uncertainty score in **Appendix D (l.1045\\u20131052).**\\n\\n---\\n\\n### Questions\\n\\n1. **What is the purpose of the CPA approximation?**\\n\\n As explained in our response above, the CPA approximation:\\n - (1) Enables closed-form expressions for the gradient of ScaLES.\\n - (2) Avoids the need to compute the Hessian of the decoder.\\n - (3) Provides a geometric interpretation of the decoder function as a map that partitions the latent space into regions, before applying the softmax function.\\n\\n2. **What is the meaning of \\\"scalability\\\" in reference to ScaLES?**\\n\\n We have removed the term \\\"scalable\\\" from our manuscript.\\n\\n3. 
**Is the latent dimension of the generative models considered in this work similar to what one would expect to use in more practical settings for LSO?**\\n\\n To the best of our knowledge, the VAE models considered in this study have latent dimensions comparable to those used in practical applications. We have provided a citation backing up this claim in our response above (**On Scalability**).\\n\\n4. **Lastly, I am curious if the authors have considered alternative instantiations of the ScaLES heuristic?**\\n\\n The suggestion of using normalizing flows as an alternative is intriguing and certainly worth further exploration. In this study, we focus on methods that do not require additional training and can therefore be **easily integrated into existing pipelines (l.85-86)**.\"}", "{\"comment\": \"I thank the authors for their reply and their effort in implementing the changes according to the feedback given. I think these changes have improved the quality of the paper. I am hence raising my score to a 6, thus recommending acceptance of the paper. I am also increasing my score for the paper presentation to a 3: good.\"}", "{\"title\": \"thank you!\", \"comment\": \"much appreciated :)\"}", "{\"comment\": \"I thank the reviewers for their clarification and paper updates.\\n\\n*Re OOD detection:* The issue of OOD detection is not specific to normalising flows and was also observed in VAEs by Nalisnick et al. In that sense, I believe that my comment is related to the issue of heuristic validity raised by Reviewer yzw5. Based on Nalisnick et al., it is unclear whether OOD samples will always have a lower likelihood, and, as suggested by Reviewer yzw5, OOD samples may still be of interest in some configuration. 
Because of this, is it possible that some improvements observed with LES are due to a (spuriously) high likelihood of OOD samples?\\n\\nGiven this and taking into account the observations of other reviewers, I am lowering my score to 6 for now.\"}", "{\"comment\": \"Dear reviewer, did our response address your concerns and questions? If not, we would love to carry out additional experiments and/or provide further clarification.\"}", "{\"summary\": \"This paper proposes a method for latent space optimization called ScaLES. The authors highlight that existing methods for latent space optimization often yield latents which correspond to invalid configurations in the observation space. To focus on latents within the \\\"valid set\\\", the authors propose to use the likelihood of a sample under the learned generative model, specifically using the decoder contribution to the likelihood. The authors show that incorporating this score into latent space optimization can be done in a computationally tractable manner and yields superior performance to existing methods on benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses an important problem, i.e., finding effective methods for latent space optimization.\", \"The paper is straightforward to understand and well written.\", \"The paper does a very good job motivating the need for their method and contextualizing it relative to prior work.\", \"The method seems simple and straightforward to implement.\", \"The method shows promising performance in terms of identifying valid latent configurations yielding performance gains over existing methods.\"], \"weaknesses\": \"**CPA Assumption**\\n\\nA key aspect of the author's method is the assumption that all reasonable deep generative models can be well approximated by continuous piecewise affine (CPA) spline operators. I am confused about exactly what the purpose is of making this approximation? 
If the sole purpose is to use this approximation to derive the likelihood of a sample under the generative model via Theorem 5, then the assumption seems unnecessary as such a formula exists via the well-known change-of-variables formula or alternatively its extension to rectangular matrices [1]. Thus, unless I am missing something, the CPA assumption and Theorem 5, which constitute a large part of the method section, seem unnecessary.\\n\\n**On Scalability**\\n\\nThe authors emphasize \\\"scalability\\\" as a key strength of the Scales method. Based on my understanding of the method, however, \\\"scalability\\\" does not seem to be the method's strong point. Scales requires computing the determinant of the Jacobian of a decoder. In the experiments the authors consider, the latent dimension is quite low, s.t. this computation can be done relatively efficiently, as shown in Table 1. It is known, however, that the computation of the log determinant of the decoder Jacobian scales cubically with the latent dimension (as the authors note). I am not an expert in LSO, thus I do not know if most problems of interest require a significantly larger latent dimensionality. However, for such problems, the Scales method will not scale gracefully. Thus, I do not view \\\"scalability\\\" as the strong point of this method.\\n\\n**Presentation**\\n\\nI think the clarity of the presentation of the contributions and main ideas in this work can be improved. As far as I understand, the main contributions of this work are the following:\\n1. The authors propose the idea that the likelihood of a generative model can be used as a heuristic for determining whether a given latent will yield a valid observation.\\n2. The authors create a score for LSO based on this idea, using only the generator term in the likelihood formula.\\n3. 
The authors show that this score can be efficiently computed on the datasets considered and that it is able to yield latents which correspond to valid configurations more robustly than existing methods. \\n\\nCurrently, I feel that the abstract and introduction are rather vague in terms of describing the core ideas and contribution of this work. To this end, I think a more direct description, along the lines of what is written above, would be easy to write and would improve the clarity of the paper.\\n\\n**Validity of Scales Heuristic**\\n\\nI am skeptical of whether the likelihood of a generated latent under the decoder will always be a good heuristic for whether this latent will yield a valid observed configuration. For example, many configurations of interest are very likely OOD w.r.t. the empirical data distribution, i.e., a newly discovered molecule may have zero coverage under the empirical data distribution. Consequently, whether Scales is able to identify such a latent is contingent on whether or not the decoder is able to extrapolate to these unseen regions. In other words, the success of Scales seems intimately tied to the performance of the decoders considered. In this sense, I think calling the method a \\\"general purpose regularization method for LSO\\\" is a bit of an overclaim. I think a deeper discussion on this point, i.e., the validity of the Scales heuristic and its relationship to the generalization of the models in question, would improve the paper.\\n\\n**Overclaims in Writing**\\n\\nI feel that some of the writing, when motivating Scales, feels like a bit of an overclaim that should be toned down or at least better expounded upon. For example, the authors motivate Scales as an \\\"exact and theoretically motivated method\\\" or \\\"implemented ... exactly as theoretically derived\\\". In what sense is the method \\\"exact\\\" and in what sense is it \\\"theoretically motivated\\\" or \\\"derived\\\"? 
The latter makes it sound like there exists a theorem on the optimality of the method, when, in reality, I presume the authors are referring to their derivation of the likelihood (Thm 5). I do not think this constitutes theoretical motivation for the method and feels like an overclaim which makes the method sound more principled than it actually is, as I understand it.\\n\\nFurther, as previously discussed, I do not think calling the method \\\"general purpose\\\" or \\\"scalable\\\" is particularly accurate, and I would encourage the authors to explain what is meant by this, if they include this language.\\n\\n**Baseline Method Descriptions**\\n\\nA key baseline method that the authors compare against is the \\\"Bayesian uncertainty score\\\". The authors, however, do not give a self-contained description of this method in the main text or appendix which makes it difficult to contextualize some of the experimental findings. \\n\\n**Summary**\\n\\nIn summary, I think the core idea of this paper, i.e., that the likelihood of a generative model can serve as a good heuristic for identifying valid latents for LSO, is potentially interesting and the author's empirical results seem promising. Currently, however, I feel the manuscript obfuscates this relatively straightforward idea with superfluous theory and vague writing. Furthermore, I think several stated motivations of Scales, in particular scalability, feel like overclaims to me, for the reasons stated above.\\n\\n**Bibliography**\\n\\n1. 
http://benisrael.net/INTEGRAL-AMS.pdf\", \"questions\": \"**1.** What is the purpose of the CPA approximation?\\n\\n**2.** What is the meaning of \\\"scalability\\\" in reference to Scales?\\n\\n**3.** Is the latent dimension of the generative models considered in this work similar to what one would expect to use in more practical settings for LSO?\\n\\n**4.** Lastly, I am curious if the authors have considered alternative instantiations of the Scales heuristic?\\n\\nSpecifically, a core idea of Scales, as I understand it, is that the generator likelihood can serve as a good heuristic for the validity of latents in LSO. This likelihood consists of the prior likelihood term and a Jacobian determinant. Currently, the authors rely on the determinant term. I am curious if the prior likelihood could instead be used in some cases. For example, consider a case in which one does not force the latents to conform to a restricted prior such as a unit Gaussian as in a VAE, but instead just encodes data with a vanilla autoencoder. In this case, the latent distribution will have a complex shape which may better capture the overall likelihood of a generated latent. To measure this likelihood, one could model the unknown latent distribution via an exact likelihood method such as a normalizing flow. I am curious if the authors have intuition on whether the prior likelihood on its own, in such cases, could serve as an effective score for LSO. I ask this particularly because using the prior likelihood seems much more scalable than the Jacobian determinant, thus I am wondering if there are cases where this score could be effective.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"answers to questions\", \"comment\": \"We thank the reviewer for their thorough reading of our work and for the positive feedback. 
We appreciate the reviewer\\u2019s suggestions on improving the mathematical notation, all of which have been incorporated in blue in the revised manuscript. Additionally, we have removed the claim regarding the absence of hyperparameters.\\n\\n---\\n\\n### Regarding the Questions:\\n\\n1. **On likelihood-based models and OOD Detection**: \\n Recent work **[1]** has shown that, for normalizing flows, the number of singular values of the Jacobian matrix exceeding a certain threshold can help address the OOD paradox (i.e., spuriously assigning a high likelihood to OOD data samples). We believe it is a promising direction to investigate whether ScaLES could also be used in this context to improve OOD detection for likelihood-based models.\\n\\n2. **On the Expression Dataset and using larger $\\\\lambda$**: \\n For the expression dataset, the (black box) problem is easier compared with the other benchmarks, which is why we think the regularization is less helpful. We believe that the gradient of the acquisition function is not as noisy as in the other datasets, and there are fewer gains in adding regularization. We note that in general we did not perform a thorough grid search to find the optimal $\\\\lambda$ for each problem and it is possible that for some cases using $\\\\lambda>5$ would be beneficial.\\n\\n---\\n\\n**[1]** Kamkari et al., \\u201cA Geometric Explanation of the Likelihood OOD Detection Paradox,\\u201d ICML 2024.\"}", "{\"summary\": \"The paper proposed a sequence density function for discrete sequential data $x$ in a latent variable model setting, i.e., VAEs. 
The authors further demonstrated the effectiveness of this density function in detecting out-of-distribution $x$, as well as in mitigating over-exploration during latent space optimization (LSO).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors showed that the proposed method is more compute efficient and outperforms the baselines in LSO.\", \"The question of LSO for sequential data is important, and using VAEs to solve it is interesting.\"], \"weaknesses\": \"VAEs are generative models, i.e., modelling $p(x,z)$ with the pre-defined prior $p(z)$ and the learned likelihood $p(x|z)$ (the decoder). When we train a VAE, we need to choose the class of model for $p(x|z)$ that correctly describes the data type of $x$. For example, in the case of RGB images, people use discrete mixture of logistics to model the discrete pixel value range from 0 to 255.\\n\\nFrom the paper, it is not mentioned what likelihood models are used in the decoder for such discrete sequential data type (maybe I missed some parts).\\n\\nIf I understand correctly, the authors proposed an estimator (denoted as $S(z)$) for $p(x|z)$. The part I am not sure about is: since we already have a free $p(x|z)$ from the VAE, can we simply use $p(x|z)$ to evaluate the density? What are the benefits on using ScaLES instead?\", \"questions\": \"1. Table 1, what is column \\u2018ScaLES/Uncertainty\\u2019? Is it the ratio or another method?\\n2. Typos:\\n 1. Page 2 line 097, \\u201cout-out-distribution\\u201d, \\u2014> \\u201cout-of-distribution\\u201d\\n 2. Page 8 line 396, \\u201csamples are decoded into the latent space\\u201d?\\n3. 
Other than VAEs, are there any methods using other sequence generative models (e.g., Transformers) for this task?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response\", \"comment\": \"We would like to thank all the reviewers for their time and careful reading of our work. We are delighted that the reviewers agree on the need to mitigate over-exploration in LSO and on the ability of our proposed method to help alleviate that limitation.\\n\\nWe now summarize the main points raised by the reviewers and the consequential changes to our manuscript.\\n\\n---\\n\\n### Changes to the Method Name and Title\\n\\nFollowing **R2\\u2019s suggestions** to clarify the exact meaning of the term \\u201cscalable,\\u201d we have revised the name of the method to **Latent Exploration Score (LES)** (we will use ScaLES and LES in our responses interchangeably) and the title of the paper to **\\u201cMitigating over-exploration in latent space optimization using LES.\\u201d** This change reflects the fact that the determinant calculation does not scale gracefully with the latent dimension of VAEs. While most existing VAEs in LSO have manageable latent dimensions, this may not be the case for other modalities or tasks. We hope this change will prevent any confusion for the readers.\\n\\n---\\n\\n### The Difference Between LES and p(X=x|Z=z)\\n\\n**R1** asked us to clarify whether LES serves as an estimator of p(X=x|Z=z), and if so, why we wouldn\\u2019t directly use p(X=x|Z=z) (i.e., the probability of the generated output given the latent vector). In response, we have revised the manuscript (l.184-185) to emphasize that LES estimates $p_{X(Z)}(X=x(z))$, assuming that **x** (which, in this case, is the entire probabilities' matrix as defined in eq. 
9, l.215) is a deterministic function of **z**, rather than a random variable that follows a conditional distribution.\\n\\nMotivated by this comment, we explored the use of $L(z) = max_x p(X=x\\u2223Z=z)$ as a regularization method for LSO, which we refer to as the **likelihood baseline.** This metric captures the probability of the most likely output given **z**, aligning with how sequences are typically generated in LSO. To the best of our knowledge, this approach has not been examined in prior work. As part of this investigation, we reproduced all the results from the original manuscript, incorporating this new baseline (**Tables 1\\u20139 in the revised manuscript have been updated accordingly**). \\n\\nOur results demonstrate that, while the likelihood score serves as a strong baseline, LES outperforms it. Specifically:\\n- For the **average rank of the top 20 solutions**, ScaLES achieved the lowest average rank of **1.83** (lower is better) compared to **2.8** for the likelihood approach. (**l.480-481**)\\n- For the **top 1 solution**, ScaLES obtained the lowest average rank of **1.97**, outperforming the likelihood approach, which achieved an average rank of **2.93**. (**l.1024-1025**)\\n---\\n\\n### Why the CPA Assumption is Needed\\n\\n**R2** sought clarification on the necessity of assuming that the decoder can be represented as a CPA, noting that the change-of-variables term can be computed without this assumption. We addressed this (l.264-269) by explaining that one can calculate LES without the CPA assumption, but this assumption is essential for several reasons:\\n1. It enables a closed-form expression for the derivative of LES.\\n2. It avoids the computationally expensive process of calculating the decoder's Hessian when computing the derivative of LES.\\n3. 
It provides a geometric perspective on how the decoder function contracts or dilates the latent space.\\n\\n---\\n\\n### Over-Claiming in Writing\\n\\nIn response to suggestions from **R2** and **R3**, we have revised the manuscript to remove the terms: **\\u201cexact,\\u201d \\u201cno hyperparameters,\\u201d \\u201ctheoretically motivated,\\u201d \\u201cgeneral-purpose,\\u201d** and **\\u201cscalable.\\u201d** We hope these changes better reflect our proposed methodology and its merits.\"}", "{\"comment\": \"I thank the authors for their detailed explanation and have updated my score accordingly.\"}", "{\"comment\": \"Dear reviewer, did our response help in clarifying the difference?\"}", "{\"metareview\": \"This paper seeks to address the overexploration problem that exists in latent space optimization, where sufficiently powerful optimizers can easily induce a VAE to produce highly \\\"out of distribution\\\" samples (e.g., unrealistic molecules) in order to achieve high scores. The authors' score essentially involves scoring molecules based on their density in the latent space *after* the decoder transformation, computed via change-of-variables. For strengths, I think the approach is fairly clever, obviously moves a step beyond simple likelihood estimation, and seems to achieve reasonable results.\\n\\nHowever, the experimental setup is very vague in a way that makes Table 8 extremely hard to interpret, and potentially makes LES look significantly better. To be completely frank, I think some of these unanswered questions alone mean that the paper absolutely needs another round of review with additional clarity. I detail my concerns extensively below.\\n\\n1. Why are we reporting the average of top 20 solutions in the main text, with the top 1 solution only appearing in the supplementary materials? This is an extremely non-standard thing to report when the methods involved don't explicitly try to optimize for top 20 performance. 
While some methods exist in the literature in LSO that do this (e.g., Maus et al., 2023), diversity constrained optimization is not what's being done here, and this is never explained as far as I can tell.\n\n2. [Read the whole point here before you think I've misunderstood the purpose of the paper.] Some of the baselines in the paper do not achieve anywhere near previously reported experimental result values even when looking at the top 1 table in the appendix (e.g., TuRBO with SELFIES 25 achieves [0.49, 0.31, 0.49] vs [>0.75, >0.90, >0.65] in Maus et al., 2022). Now, there are two possible completely reasonable explanations for this:\n - Possible explanation one: proposals made by TuRBO are being filtered out via the `rd_filters` check, and the method is now failing because it is oblivious to the authors' (reasonable) proxy definition for a \"valid\" molecule. This would be the best possible explanation, because it means that the authors' method is working very well!\n - Possible explanation two: the methods are not being run to convergence, or for an insufficient evaluation budget. This would be less good: even if the baselines stumble due to the validity checks, if they *eventually* produce solutions that pass these checks the story needs to now be about optimization *efficiency*, but optimization performance as a function of time is not evaluated anywhere in the paper.\n\nThe crux of the problem is that it's extremely unclear from the paper which of the above two factors is dominating here. \n\nFor bullet point 2-(1), the only real ablation of the impact of filtering is the \"fraction valid\" table in the appendix (Table 10), which (a) is not convincing (we do not see much worse performance of the validity-unaware methods on *every* task), and (b) doesn't tell the whole story. 
Assuming the authors are doing the reasonable thing and \\\"invalid\\\" solutions (e.g., as measured by `rd_filter`) are thrown out post hoc after the run has concluded (again this should be detailed), what we need to see here is **a comparison between the unfiltered scores and the filtered scores**. A large gap here would convincingly indicate that yes, the authors' implementation of these baselines is working as intended, they just aren't producing *valid* solutions.\\n\\nFor point 2-(2), at a *minimum* the evaluation budget needs to be listed. Much better, and arguably solving both parts of concern 2, would be to simply plot both unfiltered and filtered top 1 score as a function of the evaluation budget. Such a plot would let us see (1) yes the unfiltered versions of the algorithms eventually achieve their expected performance, but (2) as soon as you filter performance levels off much lower because these methods are validity oblivious. Or, alternatively, perhaps it's the case that the unfiltered optimizers do eventually achieve good filtered scores, they just do so much slower. The results currently in the paper might indicate this is not likely to be the case, but that really depends on the evaluation budget used here.\\n\\nI'm almost at character limit, so I'll just mention here that I don't mean the above lengthy points to be overwhelming criticism, I actually really quite like the method in this paper, and think it's much more clever than e.g. simple likelihood scoring or similar. However, the way the results are presented to the reader make it *very* hard to evaluate a few extremely key aspects of the work in a way that makes me think the entire experimental presentation simply needs to be redone.\", \"additional_comments_on_reviewer_discussion\": \"I think the authors largely addressed most of the reviewer concerns here, and were quite detailed in their updates. 
Overall, I think the paper is quite close to the bar even with my additional concerns above, but my own concerns combined with a few still slightly hesitant reviewers just didn't push me over the edge here.\"}", "{\"title\": \"thanks for your response\", \"comment\": \"We thank the reviewer for their thoughtful comments.\\n\\n## 1. Likelihood and OOD Data\\nWe would like to clarify that we did not claim OOD regions will **always** have lower density, but rather that OOD regions are **more likely** to have lower density\\u2014a claim quantitatively supported in Table 2. As noted in our original submission, likelihood consistently shows a strong correlation with being in-distribution, defined by decoding into valid objects. This is evidenced by the high AUROC (**0.92 on average and always above 0.75**) values observed across the various decoders we analyzed. \\n\\nIt is also important to note that Nalisnick et al.'s analysis focused on images, which involve a **different data modality** (continuous images versus discrete sequences) and applied a **different definition of OOD**, specifically based on images from different datasets rather than latent vectors and the concept of validity. A possible explanation for the differences between our findings and those of Nalisnick et al. is that our definition of validity more effectively distinguishes in-distribution data from out-of-distribution data compared to the approach used in their analysis of images from different datasets.\\n\\n---\\n\\n## 2. Interest in OOD Data Points\\nWhile it is true that OOD data points can sometimes be of interest, the key issue is not their potential value but whether there exists a practical algorithm to systematically and efficiently identify such points. For LSO, our analysis (i.e., the poor performance of *LSO (L-BFGS)* method) and previous work ([1] and [2]) suggest that no such algorithm currently exists. 
To substantiate this claim, we include below relevant excerpts from the cited literature in our introduction:\\n\\n> \\u201cAlthough in principle optimization can be performed over all of Z, it has been **widely observed** that optimizing outside of the feasible region tends to give **poor results**, yielding samples that are low-quality, or even invalid (e.g., invalid molecular strings, non-grammatical sentences); therefore, **all LSO methods known to us employ some sort of measure to restrict the optimization** to near or within the feasible region.\\u201d[1]\\n\\n> \\u201cAutomatic Chemical Design possesses a deficiency in so far as it **fails to generate a high proportion of valid molecular structures**. The authors hypothesize that molecules selected by Bayesian optimization lie in \\u2018dead regions\\u2019 of the latent space **far away from any data that the VAE has seen in training, yielding invalid structures when decoded.**\\u201d[2]\\n\\nWe believe these examples, drawn from **highly cited papers written by leading researchers in the field**, provide compelling evidence that while OOD data points might occasionally be valuable, there are no practical, systematic methods to reliably identify them. We will be happy to further emphasize this point in the revised manuscript, to make it extra clear.\\n\\n---\\n\\n## 3. Empirical Improvements and OOD Regions\\nFinally, regarding the question of whether the empirical improvements we observe might stem from exploring OOD regions: we do not believe this to be the case. The following points support our reasoning:\\n\\n- The analysis in Table 2 demonstrates that LES **consistently achieves higher values** in regions of the latent space that decode into valid objects, which we and others ([1] and [2]) define as in-distribution.\\n- Across 30 experiments, the proportion of valid solutions generated during optimization with LES is **always higher** than without LES. 
(Table 10, LES is always higher than LSO (GA))\\n\\nGiven these observations, we find it **highly unlikely** that LES explores OOD regions, as these would typically not decode into valid objects (per [1] and [2]).\\n\\n---\\n\\n### References\\n[1] Tripp, A., Daxberger, E. and Hern\\u00e1ndez-Lobato, J.M., 2020. Sample-efficient optimization in the latent space of deep generative models via weighted retraining. *Advances in Neural Information Processing Systems*, 33, pp.11259-11272.\\n\\n[2] Griffiths, R.R. and Hern\\u00e1ndez-Lobato, J.M., 2020. Constrained Bayesian optimization for automatic chemical design using variational autoencoders. *Chemical Science*, 11(2), pp.577-586.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful critique of our work. We are pleased to note that the reviewer **does not dispute our conclusion that LES outperforms the likelihood score**, as supported by the results presented in Tables 2, 3, 8, and 10. Below, we summarize these findings:\\n\\n- **Table 2**: LES outperforms the likelihood score in identifying OOD data points, achieving an average AUROC of 0.93 compared to 0.91 for the likelihood score (l.377).\\n\\n- **Table 3**: LES regularization demonstrates superior optimization results for the top 20 solutions:\\n - **Average ranking** (lower is better): LES achieves a value of 1.83, compared to 2.8 for the likelihood score.\\n - **Number of times a solution is within 1 standard deviation of the best solution** (higher is better): LES achieves 21 instances, compared to 14 for the likelihood score.\\n\\n- **Table 8**: LES regularization improves optimization results for the best solution found:\\n - **Average ranking** (lower is better): LES achieves 1.97, compared to 2.93 for the likelihood score.\\n - **Number of times a solution is within 1 standard deviation of the best solution** (higher is better): LES achieves 18 instances, compared to 16 for the likelihood score.\\n\\n- **Table 10**: LES regularization delivers 
a higher percentage of valid solutions across all tasks, with an average of 0.61 compared to 0.58 for the likelihood score.\\n\\nWe would like to emphasize that our manuscript explicitly recognizes the likelihood score as **\\u201ca more computationally efficient alternative, which comes with some performance trade-offs\\u201d (lines 308-309)**.\\n\\nWe also appreciate that the reviewer does not dispute the **conceptual differences** between the two approaches, as highlighted in both our previous response and the work of Nalisnick et al., cited by Reviewer 3. To reinforce this point, we include relevant excerpts from Nalisnick et al.:\\n\\n> \\u201cThe VAE and many other generative models are defined as a joint distribution between the observed and latent variables. However, another path forward is to perform a change of variables. In this case x and z are one and the same, and there is no longer any notion of a product space X \\u00d7 Z.\\u201d \\n\\n> \\u201cThe change of variables formula is a powerful tool for generative modeling as it allows us to define a distribution p(x) entirely in terms of an auxiliary distribution p(z), which we are free to choose, and f.\\u201d\\n\\nBeyond the **quantitative and conceptual differences highlighted above**, we conducted an additional experiment to further demonstrate the advantages of LES in recovering the true density.\\n\\nIn this experiment, we consider a d-dimensional (d=25,56,75 and 256 to match the latent space dimensions in our study) Gaussian vector (z) transformed into a vector of \\u201cprobabilities\\u201d using the softmax transformation. For each data point, we calculated LES and the likelihood score, along with the **true** density of X under the softmax transformation. To visualize the differences, we sampled evenly spaced data points between -20 and 20 along the first dimension of z, while sampling the other dimensions from a Gaussian distribution with a standard deviation of 0.1. 
These results, presented in Appendix C1 of the revised manuscript, clearly demonstrate that LES provides a more accurate estimate of the true density of X. In contrast, the likelihood score fails to capture the true density's correct structure. \\n\\nTo summarize, we have demonstrated the following differences between LES and the likelihood:\\n1. **Mathematical difference**: LES and the likelihood score are based on different formulas.\\n2. **Conceptual difference**: The two approaches make distinct probabilistic assumptions.\\n3. **Large differences in density estimation**: LES outperforms the likelihood score in a toy model setting.\\n4. **Performance difference in LSO**: LES demonstrates superior results when applied to LSO.\\n\\n[1] Nalisnick, Eric, et al. \\\"Do Deep Generative Models Know What They Don't Know?.\\\" International Conference on Learning Representations. 2019\"}", "{\"summary\": \"Latent Space Optimisation (LSO) is prone to over-exploration, leading to invalid solutions. This paper proposes to regularise the acquisition function $\\\\\\\\mathcal{A}_{\\\\\\\\hat{f}}(\\\\\\\\mathbf{z})$ to restrict the selected query points $\\\\\\\\mathbf{z}^{(new)}$ to a subset leading to valid output sequences. This is done via the proposed score, ScaLES, which approximates the log-likelihood of valid samples through the log-determinant of the decoder's Jacobian (pseudo) inverse. This score is shown to be a good proxy measure both theoretically and empirically. Furthermore, ScaLES also demonstrates SOTA computational time compared to existing methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Quality\\n======\\n- The paper is theoretically sound; intuitive explanations and well-detailed derivations are provided in both the main paper and the Appendix. I carefully checked the derivations and did not find any issues. 
\\n- The experiments provided are extensive, using several architectures, datasets, and hyperparameters. Additional ablation results are also provided. All this shows that ScaLES generalises well across various configurations.\\n- The limitations of ScaLES are discussed, and the impact of some theoretical assumptions in practical applications is well-detailed.\\n\\nClarity\\n=====\\n- The paper is well-written and easy to follow.\\n- As far as I know, the contextualisation with respect to related work is done adequately. Still, LSO is not my area of research, so I may not notice missing related work.\\n\\nOriginality\\n========\\n- As far as I know, the proposed method is novel. However, as I mentioned above, LSO is not my area of research, so I may not be aware of concurrent work close to the proposed method.\\n\\nSignificance\\n=========\\n- ScaLES is well-motivated theoretically and shows SOTA results across extensive experiments. This score can be of interest for the LSO community, both on the theoretical and practical side.\", \"weaknesses\": [\"I really enjoyed reading this paper, and I only have a few minor comments.\", \"Mathematical notation\", \"=================\", \"In Alg. (1), the new samples are generally referred to as $\\\\\\\\cdot^{(new)}$, except $y$, which is referred to as $y^{new}$. I suggest renaming it $y^{(new)}$ for consistency.\", \"In Eq. (6), $L$ is not defined.\", \"l. 212, the vocabulary size and sequence length are denoted in two ways: D, L and $D, L$. Which notation is correct?\", \"In Eq. (19), shouldn't $\\\\\\\\mathbf{x}$ be $\\\\\\\\mathbf{x}_{\\\\\\\\mathbf{z}}$ as in Eq. (9), or did I miss something there?\", \"l. 648, could the numerator of $p_{\\\\\\\\mathbf{z}}^{(i)}$ be rewritten with $\\\\\\\\exp(\\\\cdot)$ instead of $e^{\\\\cdot}$? the font size of the fraction is quite small, and it is hard to distinguish which subscripts go where.\", \"Clarity\", \"=====\", \"Are the results in Fig. 9 an average over all the datasets? 
If so, could we also have the details per dataset (especially for the challenging Expressions dataset)? If it is only over one dataset, could the dataset used be mentioned in the legend?\", \"In Sec. 5, given the need to select a good value for $\\lambda$, I think the claim that \\\"ScaLES has no hyperparameter\\\" could be misleading. I would encourage the authors to reformulate this and mention the need for the $\\lambda$ parameter to weight the score given by ScaLES.\", \"Typo\", \"====\", \"Appendix A is empty, consider removing it if not used.\"], \"questions\": \"- [1] observed that almost all types of probabilistic generative models can spuriously assign a high likelihood to OOD data samples. How would this affect ScaLES?\\n- In 4.2, the authors mentioned that $\\\\lambda > 0.5$ did not improve the optimisation process. Are the results worse in that case? If so, why?\\n- Can the authors provide some insights into why the Expressions dataset was more challenging for ScaLES? Did $\\\\lambda > 0.05$ provide worse results or just no improvements?\\n\\nReferences\\n=========\\n[1] Nalisnick, Eric, et al. \\\"Do Deep Generative Models Know What They Don't Know?.\\\" International Conference on Learning Representations. 2019\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
5MBUmj5mTI
On the Influence of Shape, Texture and Color for Learning Semantic Segmentation
[ "Annika Mütze", "Natalie Grabowsky", "Edgar Heinert", "Matthias Rottmann", "Hanno Gottschalk" ]
In recent years, a body of works has emerged, studying shape and texture biases of off-the-shelf pre-trained deep neural networks (DNN) for image classification. These works study how much a trained DNN relies on image cues, predominantly shape and texture. In this work, we switch the perspective, posing the following questions: What can a DNN learn from each of the image cues, i.e., shape, texture and color, respectively? How much does each cue influence the learning success? And what are the synergy effects between different cues? Studying these questions sheds light upon cue influences on learning and thus the learning capabilities of DNNs. We study these questions on semantic segmentation which allows us to address our questions on pixel level. To conduct this study, we develop a generic procedure to decompose a given dataset into multiple ones, each of them only containing either a single cue or a chosen mixture. This framework is then applied to two real-world datasets, Cityscapes and PASCAL Context, and a synthetic data set based on the CARLA simulator. We learn the given semantic segmentation task from these cue datasets, creating cue experts. Early fusion of cues is performed by constructing appropriate datasets. This is complemented by a late fusion of experts which allows us to study cue influence location-dependent on pixel level. Our study on three datasets reveals that neither texture nor shape clearly dominate the learning success, however a combination of shape and color but without texture achieves surprisingly strong results. Our findings hold for convolutional and transformer backbones. In particular, qualitatively there is almost no difference in how both of the architecture types extract information from the different cues.
[ "cue influence", "shape", "texture", "semantic segmentation", "bias", "convolutional neural network (CNN)", "transformer" ]
Reject
https://openreview.net/pdf?id=5MBUmj5mTI
https://openreview.net/forum?id=5MBUmj5mTI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pkZ8lsnIo9", "iaak2euo66", "ZsdM8eTjBt", "XjSGI9d53P", "Ut4PMW94Hs", "UPAx8sVkaA", "SB9rsoEymb", "NeFZFZcvxf", "NC5YVfcLmr", "MVP0HcFopD", "HwFcsJhMUS", "CmwKpd8szb", "83ZDj7670F", "6tr6zZUCPd", "3Xae1mwCzG", "2GpEknDpBL", "1PKrrZYcNX", "0Zvy0wayi9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732621660424, 1732484977818, 1732483686770, 1737524067992, 1730554687511, 1732651974328, 1732484730230, 1730670288043, 1730215669307, 1734955665793, 1732484598093, 1732643730596, 1732966910438, 1729797127133, 1732630267769, 1732485019372, 1732483770131, 1732484233117 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_pTje" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_pTje" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_bPD6" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_bPD6" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_o1oW" ], [ "ICLR.cc/2025/Conference/Submission10650/Area_Chair_xPUU" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_o1oW" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_j3xz" ], [ "ICLR.cc/2025/Conference/Submission10650/Reviewer_j3xz" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ], [ "ICLR.cc/2025/Conference/Submission10650/Authors" ] ], 
"structured_content_str": [ "{\"comment\": \"Thanks to the author for responding to the question, I will keep my vote score.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"First, we would like to thank the reviewer for the constructive and positive feedback. We sincerely appreciate the recognition of the importance of the problem we address with \\\"a good number of experiments across different datasets and architectures\\\".\\nIn the following, we discuss the raised concerns which helped us refine the paper. All adaptations we made are highlighted in violet in the revised paper.\\n\\n**[W1] unclear results when looking at the rank change**\\n\\n*Response:*\\nLearning semantic segmentation is a difficult task which is still not fully understood. In our paper, we propose a method which gives insight into the cue influence down to pixel level for the first time. \\nHowever, the unique characteristics of each dataset result in differences in how much information the cues contribute to the learning task.\\nThe rank change column is supposed to abstract from the numeric results that suffer from variance. Rank changes (except for the synthetic dataset) mainly occur when the numeric results are very close to each other and often can be explained by the variance. For the synthetic data the rank changes are more pronounced due to the rendering characteristics (see also our response to [W2, Q2] of reviewer pTje).\\nNonetheless, it is noteworthy that a consistent and intuitive but not yet proven trend emerged across multiple datasets which ranks color, texture and shape cues and texture-shape combinations. Furthermore, we calculated the rank correlation across different datasets. The Pearson correlation coefficient of 0.934 for the Cityscapes and the CARLA cue ranking denotes a high correlation. 
We adapted the formulation in paragraph \\\"Cue Influence on Mean Segmentation Performance\\\" in the paper to break down the arguments more precisely.\\n\\n\\n**[W2] The claimed results are not surprising, and I do not see a clear impact or the direction of this paper. Furthermore, is not surprising that using all cues is better.**\\n\\n*Response:* \\nWe acknowledge that the findings may appear intuitive; however, we would like to highlight that this is the first study in semantic segmentation to experimentally validate these intuitions across diverse depth levels, down to the pixel level and across different architectures. We believe this contribution is particularly valuable, as also noted by reviewer o1oW, since intuitions can sometimes prove incorrect or only partially true, as seen with the shape bias hypothesis for CNNs in classification (see Geirhos et al.).\\nThe same study shows, using all cues of ImageNet at the same time can lead to a bias. If a certain bias is helpful in solving the task it would be intuitive that this cue encodes more information than other cues. However, we find it interesting, that from an information retrieval perspective neither shape nor texture alone clearly dominates the learning success.\\nFrom our perspective, proving these not surprising aspects has an impact on the deeper understanding of the learning process of DNNs for semantic segmentation. We would like to also draw attention to the aspect that we propose a general method which allows to decompose any custom dataset. This allows not only to analyze the specific cue influences but also to use the decomposed data to, e.g., study biases of pre-trained DNNs.\\n\\n**[W3] and [Q3] One final major concern is about the validity of some experiments. In particular, on how evaluation is done for the color cue expert. [Is the capacity or the usefulness of the cue the reason for the poor performance?] 
\nFurthermore, I am not sure that it is a valid method to train a model on one cue and then test it on the original images. [How about testing] each cue expert under the scenario it has been trained on?**\\n\\n*Response:* \\nWe appreciate the detailed review and clarify the concerns with respect to 1) the color cue expert and 2) the cue evaluation method.\\n1) For the color cue expert, we investigated FCN models with only 1x1-convolutions with different capacities to balance between over-parametrization, GPU RAM (80 GB) constraints and comparability.\\nAll models achieved comparable performance, which is why we initially refrained from including this in the paper. We will add the results in the appendix to demonstrate that the poor performance is not due to the model capacity.\\n2) We agree with the reviewer that the domain shift in the evaluation method is worth investigating.\\nFor the shape cues this evaluation is reasonable since the transformation is per image and can be applied before processing the image by the cue expert. The main findings of this study have been described in the first paragraph of 4.2. 'Numerical Results' and the detailed table has only been presented in the appendix (cf. 'Comparison of Cue Influence Without Domain Shift') due to the page limitation. For the revised manuscript, we enlarged the table and included the domain-shift-free evaluation for the texture cue (combinations) as well.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"First, we would like to thank the reviewer for the constructive and positive feedback, which recognizes the comprehensiveness and scope of our study, which \\\"sheds light on how [shape, color, and texture cues] contributes to segmentation performance's\\\". Furthermore, we appreciate the recognition of its relevance to domain adaptation and model failure prediction.\\nIn the following, we will address the stated weaknesses and answer the reviewer's questions individually. 
All adaptations we made are highlighted in violet in the revised paper.\\n\\n**[W1] \\\"The study\\u2019s approach, while straightforward, lacks sufficient depth in its experimental design and interpretation[...]\\\"**\\n\\n*Response:* From our perspective, on one hand our work presents a detailed experimental setup and an analysis at varying scales. Our experimental setup provides mean segmentation performance over whole datasets and complements this with deeper analyses by looking into different classes, location dependence, influences of segmentation boundaries, influence of segment sizes within one class and lastly architecture dependence. Statistical evaluations are complemented with a number of qualitative results. \\nOn the other hand, we agree w.r.t. deeper insights that our manuscript offers potential for improvement. We conducted an additional experiment investigating the influence of backbone depth on the performance of cue-expert-models which allows for conclusions on the architecture choice depending on the dataset characteristics. We find that shape experts benefit from deeper neural networks. It also turns out that extracting the texture cue can be achieved well by rather shallow CNNs.\\n\\n\\n**[W2] Most of the findings [...] are valid but not particularly surprising and interesting\\\"**\\n\\n*Response:* We understand that some of our findings may seem intuitive. However, we would like to emphasize that this is the first study in semantic segmentation experimentally confirming these intuitions on diverse depth levels down to pixel level, which we believe is of particular value (see also reviewer o1oW).\\nOur study contributes to the fundamental understanding of how DNNs learn from image data. In particular, we provide new evidence that can further quantify intuitive and widely spread statements like \\\"shape cues correspond heavily to semantic boundaries\\\" which, to the best of our knowledge, have not been quantified before. 
In addition, our study reveals that, from an information retrieval perspective, the common claim that CNNs are texture-loving does not hold, since neither shape nor texture alone clearly dominates the learning success (note that this does not contradict the findings on biases in pre-trained CNNs as studied in Geirhos et al.).\\nFurthermore, we would like to put emphasis on the novelty of our findings with respect to transformer models. We observe that transformer architectures are better suited than CNNs to extract information from a cue or a limited cue combination - no matter whether these cues are based on shape or texture.\\nHowever, qualitatively, i.e., in terms of the rankings of the different cue experts, there are no serious differences between CNNs and transformers (rank correlation of 0.973).\\nThis seems counterintuitive since transformers evaluated in the context of classification are known to be more shape-biased than CNNs (Tuli et al.). This indicates that the presence of a shape bias in semantic segmentation networks does not imply that transformers are less effective at learning from texture. We will adjust the narration in the manuscript to set the reader's expectations accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This work studies the impact of shape and texture on training DNNs. They develop a generic procedure to decompose a given dataset into multiple ones, each of them only containing either a single cue or a chosen mixture. The study on three datasets reveals that neither texture nor shape clearly dominates the learning success; however, a combination of shape and color but without texture achieves surprisingly strong results. The findings hold for convolutional and transformer backbones.
In particular, qualitatively there is almost no difference in how both of the architecture types extract information from the different cues.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper comprehensively analyzes the impact of shape, texture, gray and their combination on semantic segmentation tasks, and provides a method to derive a texture-only dataset. This paper compares the different effects of these image cues on CNN and Transformer.\\n2. The structure of the paper is relatively clear and the introduction of the methods is relatively detailed.\\n3. Studying the impact of different image cues on semantic segmentation is very meaningful for designing networks and contributes to the deep learning community.\", \"weaknesses\": \"1. This paper shows the effects of different visual cues on semantic segmentation, but lacks a detailed analysis of why these effects occur, and explores which modules and operations are introduced into the model to reduce these effects.\\n2. In Table 3, we find that the rank change of the CARLA dataset is large relative to the Cityscapes dataset. Please provide more explanations why different performances are shown on different datasets.\\n3. Figure 6 is not clear enough.\", \"questions\": \"1. After we have studied the impact of different visual cues on the segmentation effect of the model, can we use certain deep learning operations to specifically improve the effect of the model?\\n2. In Table 3, we find that the ranking changes of the CARLA dataset are larger than those of the Cityscapes dataset. Why do visual cues show such differences on different datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional discussion\", \"comment\": \"Thank you for your response. 
While it partially addressed my concerns about the experimental setup, I still find the manuscript lacking in-depth and meaningful insights. I greatly value experimental and analytical studies that thoroughly investigate fundamental problems\\u2014some of my own papers also fall into this category. However, such works must present a clear perspective or thesis. This paper, however, feels more like a straightforward experimental report, and it is not clear what key message or \\\"punchline\\\" the authors aim to convey. After reviewing the rebuttal, my assessment remains unchanged.\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**[W5] I feel like the paper lacks some practical takeaways or ideas for future work that would use the findings of this study for improving the performance or robustness of deep models for semantic segmentation.**\\n\\n*Response:*\\nOne of our motivations for exploring how DNNs perceive the world relates to contradictions between different sources of evidence. If the texture cue expert predicts a car but the shape expert predicts road, this can be a valuable source of redundancy and provide interpretable hints towards the safety of the overall prediction. To exploit this, a general understanding of the strengths and weaknesses of the various experts is needed, which we have provided in the present study. Independent of the practical utility, our study contributes to the fundamental understanding of how DNNs learn from image data. \\nOur data decomposition offers insights into ambiguities across different cues. This enables our approach to quantify the complexity involved in learning a dataset, helping to identify inherent ambiguities and uncertainties in the data.
We outline this use-case in the outlook section and provide exemplary contradiction heatmaps for unusual road objects for Cityscapes and CARLA in the appendix.\\n\\nIn addition, we think it is worth investigating how our analysis can be used to quantify the complexity of a learning task. We hypothesize that evaluating which and how many cues are needed to predict a specific class up to a certain accuracy can serve as a measure of learning complexity.\\n\\nTo gain deeper insights, we conducted a layer study revealing that shallow DNNs seem sufficient to learn from texture data whereas a comparably deeper architecture can learn more from shape cues.\\nBesides exploring different learning tasks, we believe that the cue influence of different sensors like hyperspectral or infrared cameras is interesting to explore. Furthermore, we also see additional value in the decomposed data for, e.g., evaluating pre-trained models w.r.t. texture or shape biases, which is ongoing work. \\n\\n**[Q1]\\nDo you expect similar conclusions for panoptic segmentation as well? This might open some new questions. For example, if the same clues have equal effect on the performance in thing and stuff classes? Mask transformers [1] disentangle the panoptic inference into mask localization and classification. It would be interesting to see which clues are more important for localization, and which ones for classification. What are your thoughts about this and the future work?**\\n\\n*Response:*\\nThis is a very interesting question which allows for new comparisons and insights. Based on our finding for semantic segmentation that the cue influence depends more on the pixel location than on the class, we hypothesize that a similar result might be observable for panoptic segmentation.
However, since panoptic segmentation combines multiple learning tasks, it is worth investigating if the cue influence depends on the learning task by analyzing and comparing different model architectures including MaskFormer. From our perspective, both hypotheses are worth investigating in more depth. We appreciate this idea and included it as future research in the manuscript.\\n\\n\\n**[Q2] [...] the transformer consistently outperforms the convolutional model by a large margin on datasets which consider only subsets of clues. What is the reason?**\\n\\n*Response:*\\nWe conjecture that the increased cue performance of the transformer model results from its increased cross-domain performance, as shown for Vision Transformers (TVT by Yang et al.) and for semantic segmentation transformers (CDAC by Wang et al.). We added this statement to the paper to address this question.\\n\\n\\n**References:**\\n- Wang, Kaihong, et al. \\\"CDAC: Cross-domain attention consistency in transformer for domain adaptive semantic segmentation.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n- Yang, Jinyu, et al. \\\"TVT: Transferable vision transformer for unsupervised domain adaptation.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023.\"}", "{\"summary\": \"The paper presents an analysis of the influence of shape, texture, and color on semantic segmentation performance, proposing a methodology that leverages augmented datasets to isolate these specific \\u2018cues\\u2019 and assess their individual contributions. To achieve this, the paper uses HSV channels to represent colors, edge detection to represent shapes, and ground-truth mask cropping to isolate textures.
Through extensive combinations of these cues, the authors evaluate the performance of semantic segmentation using both real-world datasets and the CARLA simulator, applying this methodology to both convolutional and transformer-based architectures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is easy to read, with a significant level of detail dedicated to the experiments and results. The study is comprehensive in its scope, providing extensive combinations of cue-based datasets and evaluating them across different model architectures.\\n\\nBy isolating and recombining shape, color, and texture cues, the paper sheds light on how each of these elements contributes to segmentation performance. This could be beneficial for understanding data domain gaps and predicting failure modes of segmentation models.\", \"weaknesses\": \"The study\\u2019s approach, while straightforward, lacks sufficient depth in its experimental design and interpretation. The experiments primarily provide surface-level insights without delving into a more profound analysis of underlying factors.\\n\\nMost of the findings and corresponding discussion (for example, that shape cues correspond heavily to semantic boundaries and that textures play a significant role in broader regions) are valid but not particularly surprising and interesting to computer vision researchers. Most of the results presented in the paper fall into this category. The insights gained from this study feel somewhat limited, leaving the reader with few novel takeaways. As such, it may fall marginally below the acceptance threshold in its current form.\\n\\nThere are several technical and methodological concerns; please see Questions.\", \"questions\": \"Using Voronoi diagrams, assigned randomly to semantic classes, raises questions about whether the generated \\\"texture\\\" dataset truly reflects the original distribution of classes.
How about assigning semantic classes to Voronoi diagrams by their frequency in the original dataset?\\n\\nThe reliance on an external edge detector for shape cues introduces external biases. For example, if the edge detectors are trained on semantic boundaries, using them will introduce additional semantic information. How about using low-level filter-based edge detectors?\\n\\nWhen shape and color are combined (no texture), the input image becomes piecewise constant. Since the segmentation maps are also piecewise constant, this operation simplifies the mapping between input and output. The authors present this result as surprising, yet it is somewhat intuitive given the characteristics of segmentation tasks.\\n\\nmIoU may not be a good metric to fully capture the performance across classes with different frequencies. A more robust analysis would include additional metrics such as frequency-weighted IoU and pixel accuracy.\\n\\nFigures 10 and 11 are interesting and important. I suggest moving them to the main paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the importance of different image cues (e.g. color, shape, texture) for successful training of deep semantic segmentation models. Different from previous work, this study does not focus on the analysis of conventionally pretrained models. Instead, it is based on \\\"expert\\\" models trained on custom datasets containing only one or a combination of selected cues. To enable this type of analysis, the paper proposes several approaches to transform original image datasets into versions that contain only specific cues. The analysis includes two real image datasets: Pascal Context and Cityscapes, as well as a single synthetic dataset based on CARLA which provides more control over the image generation process. 
Additionally, the study considers convolutional as well as transformer-based architectures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Previous shape and texture bias studies rarely considered dense prediction tasks, so this looks like a valuable contribution.\\n\\nThe proposed study is the first that investigates the effect of individual cues (and different combinations of cues) on the training process of the semantic segmentation models.\\n\\nThe paper consolidates a method for extraction of different image cues from natural images. This enables transformation of the original datasets into variants that contain a single image cue or selected combination of cues.\", \"weaknesses\": \"Presentation quality could be improved.\\nI suggest placing the tables and figures right after being referenced in the text. \\n\\nI found the Texture Cue Extraction paragraph confusing. The main manuscript should include more details and be more descriptive. I am not sure how the Voronoi diagrams are created and whether they depend on the content of the corresponding image. Are class frequencies and distributions preserved in this texture cue dataset? \\n\\nThe last paragraph in 4.1 should also describe the evaluation protocol. What kind of input does each expert model consider during the evaluation? This is partly addressed at the end of the first paragraph in 4.2. However, I think it deserves a separate paragraph to clarify the edge cases, so that the reader has no uncertainties when considering the numerical results. I would also consider including the EED and HED results with processing in the corresponding table. \\n\\nI am not entirely convinced by the conclusions of the discussion about the influence of the size on the performance of texture and shape experts.
The texture expert might specialize in these classes not because of the size of their segments, but just because they are more frequent, or their texture is more unique and therefore easier to discriminate. I am not sure. It obviously depends on the properties of the training dataset. I think this requires further consideration.\\n\\nI feel like the paper lacks some practical takeaways or ideas for future work that would use the findings of this study for improving the performance or robustness of deep models for semantic segmentation.\", \"questions\": \"Do you expect similar conclusions for panoptic segmentation as well? This might open some new questions. For example, if the same clues have equal effect on the performance in thing and stuff classes? Mask transformers [1] disentangle the panoptic inference into mask localization and classification. It would be interesting to see which clues are more important for localization, and which ones for classification. What are your thoughts about this and the future work?\\n\\nIn table 2, the convolutional model and the transformer achieve similar performance when considering regular images with all cues. However, the transformer consistently outperforms the convolutional model by a large margin on datasets which consider only subsets of clues. What is the reason behind that?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies the importance of different image cues like color, shape, texture, etc., for the learning of deep semantic segmentation models. The manuscript was reviewed by four experts in the field. The recommendations are (2 x \\\"5: marginally below the acceptance threshold\\\", 2 x \\\"6: marginally above the acceptance threshold\\\").
The reviewers raised many concerns regarding the paper, e.g., unclear motivation and statement without in-depth analysis, unconvincing experimental evaluation results, etc. Considering the reviewers' concerns, we regret that the paper cannot be recommended for acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers mainly hold concerns regarding unclear motivation and statement (e.g., lack of in-depth and detailed analysis, no new insights) (Reviewer bPD6, pTje, o1oW), unconvincing experimental evaluation results (e.g., size on the performance of texture and shape experts, results are unclear and not surprising, validity of experiments) (Reviewer o1oW, j3xz). The authors' rebuttal could not fully address the above-listed concerns.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"We thank the reviewer for their constructive and positive feedback. We sincerely appreciate the recognition of the novelty of our study and method as well as the detailed review and thoughtful questions, which helped us to identify open questions and improve our manuscript. Below, we summarize and address the comments individually. All adaptations we made are highlighted in violet in the revised paper.\\n\\n**[W1] Presentation quality could be improved [by] placing the tables and figures right after being referenced.**\\n\\n*Response:* We rearranged some figure and table positions with respect to the layout feedback. We hope the adaptions meet the reviewer's expectation.\\n\\n**[W2] The main manuscript should include more details and be more descriptive [w.r.t. the Texture Cue Extraction]. How [are] the Voronoi diagrams created? Do they depend on the content of the corresponding image? 
Are class frequencies and distributions preserved in this texture cue dataset?**\\n\\n*Response:* We thank the reviewer for the constructive and precise feedback. To improve the comprehensibility of our texture extraction method, we added more details addressing the questions of the reviewer in the main manuscript in section 3, Texture (T) Cue Extraction, as well as in the detailed explanation in the Appendix.\\nWe provide here a short answer to the last two questions.\\nThe Voronoi diagrams explicitly do not depend on the semantic contours of the corresponding image. In fact, the Voronoi diagrams serve as a surrogate segmentation task to remove the original shapes in the scene. However, they could be replaced by any other polygonal partitioning. The texture patches, which are filled into the individual Voronoi cells, are gathered over multiple images to ensure a certain diversity and size. We explicitly refrained from preserving the class distribution so as not to bias the cue influence at the class level. For reference, we trained on Voronoi diagrams filled according to the base dataset's class distribution. We kindly refer to our answer to [Q1] of reviewer bPD6 for more details. \\n\\n**[W3] [...] The evaluation protocol is partly addressed at the end of the first paragraph in 4.2. However, I think it deserves a separate paragraph to clarify the edge cases. What kind of input does each expert model consider during the evaluation? \\n(I would also consider including the EED and HED results with processing in the corresponding table.)**\\n\\n*Response:*\\nWe agree with the reviewer that our paper can be improved by inserting a dedicated paragraph for the evaluation protocol. We appreciate this constructive feedback and expanded the manuscript in section 4.1. To avoid confusing the reader by switching between evaluation methods, we exclude online transformations for the evaluations in Tables 2 and 3.
Instead, we included a table in the appendix with all in-domain performances, i.e., the validation dataset is pre-processed with the same cue extraction method as the\\ntraining dataset. \\n\\n**[W4] I am not entirely convinced by the conclusions of the discussion about the influence of the size on the performance of texture and shape experts. The texture expert might specialize in these classes [...] because they are more frequent, or their texture is more unique and therefore easier to discriminate. [...] I think this requires further consideration.**\\n\\n*Response:*\\nOur analysis of the cue influence on different semantic classes revealed that the combination of T+C cues is not helpful for learning all classes but only those which often cover large segments of the image and therefore frequently occur during training (cf. Fig. 3). We appreciate the detailed review, which questions the same aspects we considered ourselves: are there specific reasons to focus on these classes? \\nThe review revealed that our analysis had potential for a more concise description, so we revised the paper according to the following aspects: \\nFirstly, we describe more precisely that we generated the texture dataset with a uniform class distribution. As a result, the frequency of classes is nearly equal, and we conclude that the class frequency is not a main cause of texture experts specializing in a subset of classes. \\nSecondly, we investigated if the shape expert (S_EED-RGB) or the texture expert (T_RGB) is more useful for learning semantic segmentation with respect to the segment size within one single class (see Fig. 6). \\nFor the CARLA dataset, where the shape and texture expert perform similarly well, we found that large segments within the frequent class road have a high segment-wise recall only for the texture expert. The same trend is visible within the rarely occurring class person.\"}", "{\"comment\": \"Thank you for your response and clarifications!
After considering the authors' responses and feedback from other reviewers, I decided to maintain my score. As others have also noted, the study's insights are limited and lack practical takeaways, which is the primary reason for not assigning a higher score.\"}", "{\"comment\": \"We thank the reviewers for their comments on our response. We understand that the reviewers acknowledge the importance of our study but were not too surprised by our results. In the CV community, however, the notion of the \\\"texture-biased CNN\\\" is widespread. This, of course, has a solid experimental foundation when the texture cue is massively changed during inference. As we show consistently throughout our experiments, when training on real data the shape cue combined with color is most influential. We hope that this experimental insight is of value for the CV community to interpret CNN and ViT training.\\nAnother criticism concerned the lack of in-depth investigation of the phenomena observed in our experiments. In the rebuttal we provided further studies at the pixel and architecture level, which were not addressed in the reviewers' responses but provide meaningful insights. As demonstrated in Figures 6, 12 and 14, cue decomposition allows for a deeper understanding of where which cue has the highest influence. We argue that, by fusing information from different experts, the cause of potential failures becomes more interpretable. This understanding is particularly useful when shape and texture do not correspond, e.g., a giraffe without spots like Kipekee or a fancy car with a strange shape like the Cybertruck. There is a high potential that these are still segmented correctly with the help of the experts that are not affected by the shift, e.g., the shape expert for a giraffe. Motivated by the reviews, we propose an uncertainty measure and its qualitative evaluation in the last paragraph of \\\"Cue Influence Dependent on Location in an Image\\\" in section 4.2.
We kindly invite the referees to consider this and comment.\\n\\nFinally, we would like to emphasize that proving intuitions might seem to lack in-depth insights but is important, since intuitions can turn out to be (partially) incorrect, and their confirmation therefore has a broader impact for the community.\"}", "{\"summary\": \"In this work, the authors try to answer an important question, namely what DNNs can learn from image cues such as shape, texture, and color. They argue that this is the opposite of what is typically done in previous literature, as previous works rather focus on understanding how much a trained neural network relies on different cues. In short, the aim of this paper is to understand what the important cues for successful training are.\\nTo answer this question, the authors propose to decompose a given dataset into multiple different sub-datasets, with each one including one or more cues to learn from. They set up different experiments (cue-experts) from different combinations of cues in the context of semantic segmentation and using both CNNs and Transformers.\\nThe outcome of these experiments is that there is no clear winner between shape and texture; rather, a combination of shape and color is essential for good performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. In my opinion, this paper addresses a very interesting problem, namely, which are the cues to learn from. A thorough analysis could be beneficial for other problems such as explainability, Domain Adaptation, and Domain Generalization.\\n\\n2. The paper is well-written and clear, and I particularly appreciate Table 1 summarising all the possible settings.\\n\\n3. A good number of experiments across different datasets and architectures have been conducted.\", \"weaknesses\": \"1. My main concern about this paper is that the results are unclear.
The rank change column is particularly helpful in understanding the current scenario, and the datasets clearly have different cue orders when taken individually. I understand that the authors claim that they do not drastically differ, but still, they are not consistent enough to draw clear conclusions.\\n\\n2. The claimed results are not surprising, and I do not see a clear impact or the direction of this paper. For example, it is obvious to me that a cue such as texture is overall better than color since texture also contains color information, as also stated by the authors. Furthermore, it is not surprising that using all cues is better.\\n\\n3. One final major concern is about the validity of some experiments. In particular, about how evaluation is done for the color cue expert. To my understanding, in such cases, all convolutions have been replaced with 1x1 convolution kernels. These produce very poor results, but we do not know whether this is because the model does not have enough capacity or because color is not a useful cue.\\nFurthermore, I am not sure that it is a valid method to train a model on one cue and then test it on the original images. In some cases, this could make sense, but in other cases, the distribution shift between source and target images could be so high that the model is not able to perform well.\", \"questions\": \"1. When extracting the color cue, is 1x1 convolution used for all layers or only for the first? In the former case, it is really hard to understand whether the network has enough capacity to learn a difficult task such as semantic segmentation, while the latter case would invalidate the experiment as the following layers would have a larger receptive field.\\n\\n2. The second part of the sentence at L193: \\u201cAs it preserves color, it extracts the cues S+V+HS, and analogously to the treatment of the C cue provides the cues S+V as well.\\u201d is not clear to me.\\n\\n3.
About weakness 3, I wonder whether it is possible to test each cue expert under the scenario it has been trained on, and not on the original images. Maybe the authors could clarify this point and motivate this choice.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response. After thoroughly reviewing your comments and considering the concerns raised by other reviewers, I remain convinced that the paper lacks sufficient depth in analysis and results. While the experiments provide some high-level insights, I believe they are not enough for a publication. As such, I have decided to maintain my current score.\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**[Q1] When extracting the color cue, is 1x1 convolution used for all layers or only for the first? In the former case, it is really hard to understand whether the network has enough capacity to learn a difficult task such as semantic segmentation, while the latter case would invalidate the experiment as the following layers would have a larger receptive field.**\\n\\n*Response:*\\nWe restrict the color cue model to 1x1-convolutions for the mentioned reason: to not invalidate the experiments by an enlarged field of view. As discussed for [W3], we investigated models with different capacities to mitigate the addressed problem.\\n\\n**[Q2] The second part of the sentence at L193 [...] is not clear to me.**\\n\\n*Response:*\\nWe revised the sentence to improve readability.\\n\\n**References:**\\n- Robert Geirhos et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" ICLR, 2018.\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**[Q1] Using Voronoi diagrams, assigned randomly to semantic classes [...].
How about assigning semantic classes to Voronoi diagrams by their frequency in the original dataset?**\\n\\n*Response:* We intentionally decided to fill the Voronoi diagrams with a uniform class distribution to prevent a bias towards more frequently occurring classes during training of the cue expert and to allow all textures to be learned equally well. This enabled us to exclude the class frequency as a root cause for the specialization of the texture expert on a subset of classes. Following the suggestion of the reviewer, we trained additional experts on Voronoi diagrams filled according to the original Cityscapes class frequencies, leading to a worse performance when evaluating on all cues, i.e., original Cityscapes. At the class level, the classes \\\"road\\\", \\\"wall\\\", \\\"car\\\" and \\\"train\\\" gained performance whereas the classes \\\"sky\\\", \\\"bicycle\\\", \\\"traffic sign\\\", \\\"bus\\\" and \\\"sidewalk\\\" lost the most performance measured in terms of IoU.\\n\\nclass distribution | T_V | T_HS | T_RGB\\n-------- | ------- | ----- | ----\\nCityscapes | 14.00 $\\\\pm$ 1.74 | 19.04 $\\\\pm$ 2.12 | 17.53 $\\\\pm$ 1.03\\nuniform | 17.85 $\\\\pm$ 1.30 | 20.63 $\\\\pm$ 1.41 | 20.10 $\\\\pm$ 0.98\\n\\nThe ranking within the different texture experts trained on Voronoi diagrams filled according to the Cityscapes class distribution does not change; however, due to the reduced overall performance, T_HS and T_RGB each drop one position in the ranking. The general findings for the cue influence on different semantic classes are not affected by the class distribution.\\n\\n\\n**[Q2] The reliance on an external edge detector for shape cues introduces external biases. [...] How about using low-level filter-based edge detectors?**\\n\\n*Response:* We share the reviewer\\u2019s concern that edge detectors potentially introduce biases.
We decided on up to three different types of shape extraction methods to disentangle cues as much as possible: a learned contour extraction method, a PDE-based low-pass filter and for the synthetic dataset a modified texture rendering. We chose the AI-based HED contour detection method over traditional filter-based edge detectors, since the latter tend to capture more texture information than HED (cf. Harary et al., Figure 3). To address the issue of semantic biases, we also applied the PDE-based low-pass filter, Edge Enhancing Diffusion (EED), as an additional method that extracts shape information of images and removes texture information (cf. S_EED-RGB expert). This method solely operates on the image information. We refer to the appendix for details on the EED method.\\n\\n**[Q3] When shape and color are combined (no texture), the input image becomes piecewise constant and therefore simplifies the task. The authors present this result as surprising, yet it is somewhat intuitive.**\\n\\n*Response:* Note that shape images with reduced color information (S_EED-HS and S_EED-V) have the same characteristic of being approximately piecewise constant. Here, we found it surprising that the models trained on these datasets with decreased color information perform significantly worse than the ones trained on S_EED-RGB with full color information. We acknowledge that our formulation has been ambiguous but have clarified that point.\\n\\n**[Q4] mIoU may not [...] fully capture the performance across classes with different frequencies.**\\n\\n*Response:* We have addressed this issue by studying frequency-weighted IoU (fwIoU) which has no significant influence on the general order of the results for the real-world dataset. We calculated the rank correlation between mIoU and fwIoU which results in a very high correlation with a Pearson correlation coefficient of 0.965 to 0.99 for the real-world dataset. 
For CARLA, we observe a lower rank correlation of 0.81 and see that the texture has an increased cue influence which aligns with our findings that the T cue is mostly valuable for larger segments. We included the results in the appendix.\\n\\n**[Q5] Figures 10 and 11 are interesting and important. I suggest moving them to the main paper.**\\n\\n*Response:* Thank you for recognizing the importance of Figures 10 and 11; we appreciate your suggestion to include them in the main paper. However, page constraints and an appropriate scale for readability make it challenging. We will explore ways to integrate them in the non-blind version. Either way, we enhanced their visibility by referencing them and summarizing key insights in the main body. \\n\\n\\n**References:**\\n- Sivan Harary et al. \\\"Unsupervised domain generalization by learning a bridge across domains.\\\" CVPR, 2022.\\n- Robert Geirhos et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" ICLR, 2018.\\n- Shikhar Tuli et al. \\\"Are Convolutional Neural Networks or Transformers more like human vision?\\\", arXiv:2105.07197, 2021.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"First, we would like to thank the reviewer for the constructive and positive feedback, which acknowledges the comprehensiveness of our study and its contribution to the deep learning community. In the following, we comment on the stated weaknesses and how we address them. At the end we answer the reviewer's questions. All adaptations we made are highlighted in violet in the revised paper.\\n\\n**[W1] and [Q1] This paper shows the effects of different visual cues on semantic segmentation, but lacks a detailed analysis of why these effects occur, and explores which modules and operations are introduced into the model to reduce these effects. 
\\nAfter we have studied the impact of different visual cues on the segmentation effect of the model, can we use certain deep learning operations to specifically improve the effect of the model?**\\n\\n*Response:* One of our primary goals is to demonstrate that the way DNNs perceive the world can be broken down into distinct sources of evidence. In our study, we analyze the influences of the inherent cues in an image and address the questions \\\"What can be learned from individual cues? How much does each cue influence the learning success? And what are the synergy effects between different cues?\\\"\\nThe weakness and question stated by the reviewer addresses a somewhat different perspective, namely an analysis from the model rather from the data perspective. This opens up diverse research opportunities which would constitute a standalone paper. For a deeper analysis of the cue influence on the learning success we conducted an additional experiment which relates to the aspect mentioned by the reviewer and was included into the manuscript. We investigated the influence of backbone depth on the performance of cue-expert-models. The experiment shows that shape experts profit from deeper neural networks. In contrast, for datasets with dominating texture features shallow networks might even improve the overall performance. \\n\\n\\n**[W2] and [Q2] The rank change of the CARLA dataset is large relative to the Cityscapes dataset. Why do visual cues show such differences on different datasets?**\\n\\n*Response:* Although our results show a general trend in the cue influences across all datasets (\\\"C experts are mostly dominated by T experts as\\nwell as S experts, and those are in turn dominated by S+T expert\\\"), each dataset has its own characteristics. These characteristics influence the specific cues. 
The resulting rank changes are more pronounced in domains with greater distinctions, such as synthetic versus real-world data, but are still highly correlated (Pearson correlation coefficient of 0.934). Unlike Cityscapes and PASCAL Context, the CARLA dataset is synthetic, captured in a rendered city based on a limited amount of textures, assets and synthetic lighting conditions. This leads to less diverse but more discriminatory texture and shape, which we discuss in \\\"Cue Influence Dependent on Location in an Image\\\". \\n\\n\\n**[W3] Figure 6 is not clear enough.**\\n\\n*Response:* In Figure 6, we visualize the cue influence w.r.t. the segment size within one class. We thank the reviewer for pointing out that this figure needs more clarification. We updated our analysis description accordingly in the paper in the paragraph \\\"Cue Influence Dependent on Location in an Image\\\".\\nThe results are visualized for a frequently occurring class as well as a rare class in terms of pixel count. We investigated whether the shape expert (S_EED-RGB) or the texture expert (T_RGB) is more useful for learning semantic segmentation with respect to the segment size within one single class. \\nFor the CARLA dataset, where the shape and texture experts perform similarly well, we found that large segments of the class \\\"road\\\" have a high segment-wise recall for the texture expert whereas the segment-wise recall of the shape expert drops for larger segments. A similar trend can be observed for the rare class \\\"person\\\". We conclude consistency independent of the occurrence frequency of the class.\"}" ] }
5M0ic2RxQZ
dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation
[ "Emanuele Zangrando", "Sara Venturini", "Francesco Rinaldi", "Francesco Tudisco" ]
Low-rank adaptation methods are a popular approach for parameter-efficient fine-tuning of large-scale neural networks. However, selecting the optimal rank for each layer remains a challenging problem that significantly affects both performance and efficiency. In this paper, we introduce a novel bilevel optimization strategy that simultaneously trains both matrix and tensor low-rank adapters, dynamically selecting the optimal rank for each layer. Our method avoids the use of implicit differentiation in the computation of the hypergradient, and integrates a stochastic away-step variant of the Frank-Wolfe algorithm, eliminating the need for projection and providing identifiability guarantees of the optimal rank structure. This results in a highly efficient and cost-effective training scheme that adaptively allocates the parameter budget across the network layers. On top of a detailed theoretical analysis of the method, we provide different numerical experiments showcasing its effectiveness.
[ "bilevel optimization", "parameter efficient fine-tuning", "low-rank" ]
Accept (Poster)
https://openreview.net/pdf?id=5M0ic2RxQZ
https://openreview.net/forum?id=5M0ic2RxQZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zt59PIVO4X", "oBekTAXifI", "mV6iIuiG5I", "fuTAP5xr4x", "TeZOPm5nel", "SV6QYfFU4C", "P41vaciZaD", "MTxq0kvtH1", "Lw2ps7GfV9", "KZnGEWyqfQ", "8Y25D4LwRH", "6qDApX9oNg" ], "note_type": [ "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734881034310, 1730389172746, 1730629566857, 1732704722929, 1731714315836, 1730456131314, 1731712952932, 1737524079111, 1731714580280, 1732700505648, 1731821819427, 1732588062122 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10819/Area_Chair_a7Gb" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_1wqr" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_G2LN" ], [ "ICLR.cc/2025/Conference/Submission10819/Authors" ], [ "ICLR.cc/2025/Conference/Submission10819/Authors" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_AG3z" ], [ "ICLR.cc/2025/Conference/Submission10819/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10819/Authors" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_1wqr" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_G2LN" ], [ "ICLR.cc/2025/Conference/Submission10819/Reviewer_AG3z" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces dEBORA, a bilevel optimization-based method that dynamically selects the optimal rank for each layer in parameter-efficient fine-tuning of large neural networks. By employing a hypergradient approximation that avoids large-scale computation, dEBORA significantly reduces the computational burden. Additionally, it integrates a stochastic away-step variant of the Frank-Wolfe algorithm, which eliminates the need for projection and ensures the identifiability of the optimal rank structure. 
Theoretical analysis confirms the convergence and optimality of the approach, while experimental results demonstrate that dEBORA outperforms existing low-rank adaptation methods in both efficiency and performance across various benchmarks.\", \"selected_strengths\": [\"The paper presents a novel approach to low-rank adaptation by combining bilevel optimization with a dynamic rank-selection strategy, effectively addressing the challenge of parameter-efficient fine-tuning in large neural networks. As such, the paper addresses a longstanding issue in LoRA: selecting the optimal rank for each low-rank adapter, a key factor in balancing model performance and computational efficiency.\", \"The experimental results are robust and cover a wide range of benchmarks. With fewer parameters, the proposed method demonstrates competitive or superior performance compared to existing approaches such as LoRA and AdaLoRA.\", \"An interesting new application of the Frank-Wolfe method.\"], \"selected_weaknesses\": [\"Lack of clarity in the use of some symbols and definitions of variables, which makes it difficult for readers to follow the proofs. Some formatting/style recommendations were made by the reviewers.\", \"While the authors mention the recent work BiLoRA, comparison to this method was not done in the experiments.\", \"While selecting the optimal rank is indeed critical for LoRA, in many practical applications, concerns about memory consumption and training time are often more pressing. The proposed optimization approach suggests a reduction in parameter count, but the practical impact on memory usage remains unclear.\", \"Is the method applicable to even larger models?\", \"Some questions regarding Theorem 4.1.\", \"Several more minor issues were raised.\", \"All three reviewers recommended acceptance, with scores 6, 6, and 8. I have read the reviews, rebuttals, and discussion. I believe the responses were reasonable. 
The paper is in good shape to be accepted to the conference.\"], \"additional_comments_on_reviewer_discussion\": \"Several questions were asked by the reviewers (e.g., it is unclear how rank is adjusted after determining the optimal solution $s$; questions about notation, theory, assumptions). The key questions were answered satisfactorily.\"}", "{\"summary\": \"This work introduces a compact bilevel optimization framework for low-rank adaptation, offering an efficient fine-tuning strategy for large-scale neural networks via dynamic rank selection. Theoretical analysis provides convergence guarantees, and numerical experiments demonstrate that the proposed method achieves performance comparable to advanced approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"A relatively fresh application of the Frank-Wolfe method.\"], \"weaknesses\": \"See questions below.\", \"questions\": [\"Line 150-151, maybe it's better to use a figure to visualize the bypass in the neural networks.\\\\\", \"Line 159, what's the intuition behind splitting the dataset into two parts?\\\\\", \"Line 249, I understand you are focusing on the (stochastic) Frank-Wolfe method with a mini batch of data; have you considered the full batch size?\\\\\", \"Line 472, \\\"randomly partitioned the dataset into equally sized subsets\\\", did you ensure the label balance for both datasets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces dEBORA, a bilevel optimization-based method that dynamically selects the optimal rank for each layer in parameter-efficient fine-tuning of large neural networks. By employing a hypergradient approximation that avoids large-scale computation, dEBORA significantly reduces the computational burden. 
Additionally, it integrates a stochastic away-step variant of the Frank-Wolfe algorithm, which eliminates the need for projection and ensures the identifiability of the optimal rank structure. Theoretical analysis confirms the convergence and optimality of the approach, while experimental results demonstrate that dEBORA outperforms existing low-rank adaptation methods in both efficiency and performance across various benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a novel approach to low-rank adaptation by combining bilevel optimization with a dynamic rank-selection strategy, effectively addressing the challenge of parameter-efficient fine-tuning in large neural networks. Furthermore, the avoidance of implicit differentiation through an effective hypergradient approximation is a significant strength, as it reduces both computational costs and complexity.\", \"The experimental results are robust and cover a wide range of benchmarks.\"], \"weaknesses\": \"- Despite providing several theoretical guarantees, the paper lacks clarity in the use of some symbols and definitions of variables, making it difficult for readers to follow the authors' proofs. For instance, the notation $\\\\otimes$ first appears in line 141, while $\\\\odot$ is introduced for the first time in line 215 without prior definition. Similarly, $\\\\Delta$ is introduced in Equation (8) but is only defined in Theorem 6.2, and the variable $V$ is not defined in Equation (21). Additionally, the definition of $\\\\mu$ is missing in Theorem 6.5. 
The authors are encouraged to carefully review the paper and provide clear definitions or explanations for variables and symbols, either upon their first appearance or in a comprehensive notation table.\\n- Although the overall structure of the paper is reasonable, the authors should consider removing the numbering from equations that are not referenced, as excess numbering creates a cluttered appearance. Furthermore, Equations (30) and (31) are noticeably smaller than the other equations, leading to inconsistencies in formatting. Additionally, the various variables within these equations lack definitions, making it challenging and time-consuming for readers. A careful review of whether each equation requires numbering, along with a focus on maintaining uniform formatting, would enhance the readability of the paper.\\n- While the authors mention the recent work BiLoRA (Qiang et al., 2024), no corresponding comparison is presented in the experiments. It would be valuable to understand the rationale behind this omission. Furthermore, while the authors state that they compare against the Pfeiffer adapter (Pfeiffer et al., 2021) and Houlsby adapter (Houlsby et al., 2019), the results do not clearly reflect these comparisons. Clarifying this point and providing results for these benchmarks would strengthen the experimental section.\\n\\n**References:**\\n\\n1. Rushi Qiang, Ruiyi Zhang, and Pengtao Xie. BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models. arXiv preprint arXiv:2403.13037, 2024.\\n2. Jonas Pfeiffer, Aishwarya Kamath, Andreas R\\u00fcckl\\u00e9, Kyunghyun Cho, and Iryna Gurevych. Adapterfusion: Non-destructive task composition for transfer learning, 2021. URL https://arxiv.org/abs/2005.00247.\\n3. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. 
In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 2790\\u20132799. PMLR, 09\\u201315 Jun 2019. URL https://proceedings.mlr.press/v97/houlsby19a.html.\", \"questions\": \"1. The paper introduces a strategy for dynamically selecting the optimal rank $r$ for each layer. However, it is unclear how rank $r$ is adjusted after determining the optimal solution $s$. Is $r$ recalibrated based on the optimal $s$, or is there a specific strategy for automatically tuning $r$? Could the authors clarify this process in detail?\\n\\n2. In Theorem 4.1, the authors state, \\\"assume that the gradient is locally approximately constant.\\\" However, the subsequent equations appear to be formulated in terms of the Hessian. Could the authors clarify the relationship between these two concepts?\\n\\n3. Regarding equation (3), if the matrix $\\\\mathcal{B}$ is constrained to lie on a manifold\\u2014as mentioned later in the experiments with the Oblique and Stiefel manifolds\\u2014would the first-order optimality condition still hold as stated? This consideration could significantly impact the approximations and key results presented in the paper. Could the authors clarify how the manifold constraints influence the formulation of the first-order optimality condition?\\n\\n4. In equation (35), is it necessary for the authors to clarify the invertibility of the product $AB$? Providing a discussion on this aspect would enhance the understanding of the conditions under which the equation holds.\\n\\n5. In line 755, the notation $||\\\\partial_{\\\\mathcal{B}}f_{2}^{\\\\*}||$ should be revised to $||\\\\partial_{\\\\mathcal{B}}f_{1}^{\\\\*}||$. Additionally, could the authors clarify where the boundedness of this term originates?\\n\\n \\n\\nThe article contains several typographical and punctuation errors that require careful review by the authors. 
For example:\\n\\n- Punctuation errors: in lines 208, 240, 319, 401, and 407, there are missing commas. In line 686, there is an extra period.\\n- Line 679: \\\"By\\\" should be \\\"by\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. We thank the reviewer for the clarification. Since the comment was about the peak memory usage observed during training, we would like to remind you that we added plots in Appendix G of the revised manuscript concerning real GPU memory consumption and other statistics. As the plot shows, our effective memory usage on GPU is lower than AdaLoRA: this is due to its rank truncation procedure, which necessitates temporarily storing in memory the sensitivity measure, which effectively from a memory point of view is exactly a copy of the weight adapters.\\n\\n2. The feasible set of our problem is a simple polytope $S = \\\\textbf{conv}(0,e_1,\\\\dots,e_r)$ with $e_i$ the $i$-th element of the canonical basis in $\\\\mathbb{R}^r$.\\nSo the problem we need to solve in order to get our Frank-Wolfe direction (see Step 4 of Algorithm 2) is the following linear program:\\n$$ \\\\quad z_n = \\\\arg\\\\min_{s \\\\in S}\\\\widetilde G(s_n)^\\\\top s $$\\nTaking into account the fundamental theorem of linear programming, the solution will thus be on a vertex of our polytope. By focusing on the vertices, we have $z_n=e_{i_n}$ with $i_n=\\\\arg\\\\min_i \\\\nabla_i \\\\tilde G(s_n) $, if there exists at least one component $i$ s.t. $\\\\nabla_i \\\\tilde G(s_n)<0$, and $z_n=0$ otherwise.\\n\\\\\\nTherefore, the time complexity corresponds to finding the smallest entry in the hypergradient vector (that can be done simply by calling a torch.argmin function on $\\\\widetilde G(s_n)$), which has a ${\\\\cal O}(r)$ cost as highlighted in Appendix G. 
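For concreteness, this vertex-selection step can be sketched in a few lines of Python (a hypothetical NumPy sketch for illustration, not the actual dEBORA implementation; `hypergrad` stands in for the hypergradient vector $\\widetilde G(s_n)$):

```python
import numpy as np

def fw_vertex(hypergrad):
    """Frank-Wolfe direction over the polytope conv(0, e_1, ..., e_r).

    By the fundamental theorem of linear programming, the minimizer of the
    linear program is attained at a vertex: the canonical basis vector e_i
    with the most negative hypergradient entry, or the origin if no entry
    is negative.  The whole step is a single argmin, i.e. O(r).
    """
    i = int(np.argmin(hypergrad))      # cheapest vertex candidate, O(r)
    z = np.zeros_like(hypergrad, dtype=float)
    if hypergrad[i] < 0:               # moving toward e_i decreases <g, s>
        z[i] = 1.0
    return z                           # otherwise the origin is optimal

# toy example with r = 4
g = np.array([0.3, -1.2, 0.5, -0.4])
print(fw_vertex(g))  # [0. 1. 0. 0.]
```

The returned vertex is then fed into the away-step logic of Algorithm 2; the point here is only that the linear minimization itself reduces to a single argmin over the hypergradient.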
Notice that, a cost ${\\\\cal{O}}(r)$ is negligible since the computation of the hypergradient itself has a complexity of ${\\\\cal{O}}(Lrn^2)$, and in typical applications we have $n\\\\gg r$ (the size of the original matrices is usually much bigger than the rank correction).\\n\\nWe hope we were able to clarify your doubts, and if not please let us know.\"}", "{\"comment\": \"We wish first of all to thank the reviewer for all the comments and questions. Below, we provide a point-by-point response to questions raised. Moreover, we uploaded a revised version of the manuscript including additional results and some additional clarification.\\n\\n1. Thank you for pointing out this unclear aspect of memory. Could you please clarify what type of memory consumption you are suggesting? In theory, our method requires exactly the same memory consumption as AdaLoRA. To show this further, we included in Appendix G of the revised manuscript a complexity analysis of our method together with some GPU statistics during training (time vs accuracy, time vs allocated memory and time vs power consumption) on the GLUE benchmark. We hope the added results clarify your concerns about memory usage.\\n\\n2. Thanks for pointing this out. A key advantage of FW being a projection-free method is its computational efficiency (see for example [Combettes & Pokutta, 2021]). \\nIn order to provide a clear theoretical analysis of the costs as compared to e.g. LoRA and AdaLoRA, we included in Appendix G a detailed discussion on this matter together with plots of real GPU usage.\\nWe appreciate the reviewer's interest in having an experimental evaluation on larger models such as Llama-2-7B. We are trying to set up the experiments and we will do our best to post here some (at least initial) results before the end of the discussion period, conditionally on our availability of computing resources.\\n\\n3. 
Theorem 4.1 provides a bound on how much the error in the hypergradient is affected by the curvature of the problem. In practice, if $K$ is small in $S$, that is, the gradient is approximately locally constant, then the error we perform with the closed-form solution $G$ for the hypergradient is small and decays at least linearly in $K$. All the assumptions in the statement of the theorem are necessary to understand how the curvature affects this error. However, we agree it is not obvious how to verify these assumptions in a practical setting. Nonetheless, empirically, we observed in our experiments that the approximate hypergradient estimation is a good proxy and behaves effectively.\\n\\n4. $\\tau$ was selected of the order of the starting rank $O(r)$, leaving in this way potentially $O(1)$ for each entry of $s$. We added the precise value of $\\tau$ used in the description of the experimental results in Sec. 7.\\n\\nWhat we do in practice is: motivated by the theoretical result in Theorem 6.5 about the identifiability of the optimal rank in a finite number of steps, we run Algorithm 1 for a small number of warm-up steps $n_0>0$ with the initial rank and then truncate $s_n$ (and $U$ and $V$ accordingly) when $n\\geq n_0$ by removing the entries of $s_n$ that are smaller than the precision $\\varepsilon$. In this way, we reduce to a problem with a smaller rank (as $s$ is now smaller and so are $U$ and $V$). The dimension of $s_n$ we have when the algorithm stops is the final rank $r^*$ of the model.\\nIn this way, the memory requirement is the same as AdaLoRA for a finite number of steps, and for the rest of the optimization the rank is reduced to $r^*$ (which, as we can observe in Tabs. 1 and 2, often leads to a much lower number of parameters).\\nWe apologize if this was not clear; we added a line clarifying this point in the section containing the details of the algorithm.\\n\\n5. We are unsure here; the vector is already restricted to nonnegative values. 
Can you please clarify?\\n\\n[Combettes & Pokutta, 2021] Cyrille W Combettes and Sebastian Pokutta. Complexity of linear minimization\\nand projection on some sets. Operations Research Letters, 49(4):565\\u2013571,\\n2021.\"}", "{\"summary\": \"This paper tackles the problem of optimal rank selection in low-rank adaptation methods, which is essential for efficient fine-tuning of large-scale neural networks. The authors propose a bilevel optimization strategy that trains matrix and tensor low-rank adapters while dynamically selecting the best rank per layer. Their approach avoids implicit differentiation, uses a stochastic variant of the Frank-Wolfe algorithm, and provides identifiability guarantees for the optimal rank structure. Theoretical analysis and numerical experiments support the method's efficiency and effectiveness, showcasing adaptive parameter allocation across network layers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper addresses a longstanding issue in LoRA: selecting the optimal rank for each low-rank adapter, a key factor in balancing model performance and computational efficiency. Whereas traditional PEFT algorithms require manual tuning of this parameter, the authors propose a criterion based on optimization.\\n2.\\tTo address this issue, it introduces a bilinear optimization approach that alternates between optimizing the LoRA matrices and their ranks, presenting a novel contribution to the PEFT community.\\n3.\\tWith fewer parameters, the proposed method demonstrates competitive or superior performance compared to existing approaches such as LoRA and AdaLoRA.\", \"weaknesses\": \"[key issue] While selecting the optimal rank is indeed critical for LoRA, in many practical applications, concerns about memory consumption and training time are often more pressing. 
The proposed bilinear optimization approach suggests a reduction in parameter count, but the practical impact on memory usage remains uncertain. A detailed comparison of memory consumption in real-world scenarios would strengthen the paper\\u2019s contribution.\\n\\n[key issue] The proposed method has been evaluated on architectures such as DeBERTa and ResNet. However, the reviewer is interested in its applicability to even larger models. Specifically, would this bilevel optimization approach introduce significant additional computation time? It would be beneficial if the authors could provide both theoretical analysis and empirical results addressing this question. For instance, would the use of a Frank-Wolfe optimization strategy substantially increase training time on large-scale models such as LLaMA2?\\n\\nIn Theorem 4.1, the condition K\\u22650 alone appears insufficient to ensure that the gradient is locally constant. Additionally, the assumptions regarding the uniform invertibility and boundedness of the bilinear operator merit further examination for practical feasibility.\\nThe parameter \\u03c4 is tied to the rank selection process, yet its practical setting and impact on memory usage are not fully clarified. \\n\\nCould the authors discuss guidelines for selecting \\u03c4 in practice and elaborate on its relationship with memory requirements?\\n\\nIn line 154, should the parameter s be restricted to nonnegative values? Further clarification on this point would be helpful.\", \"questions\": \"See the above \\\"weakness\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We wish first of all to thank the reviewer for all the comments. Below, we provide a point-by-point response to the weaknesses and questions raised.\\n\\n* Weaknesses:\\n1. We apologize for the confusion with the notation. 
$\\\\otimes$ refers to the outer product of tensors, and $\\\\odot$ to the Hadamard product (entrywise); we assumed this was a standard notation, but we agree it would be much better to clarify. We now define both of these operations at their first occurrence.\\nAbout $\\\\Delta$, we now define it at the beginning of Section 6, right after the description of the optimization problem (4). For what concerns $V$, we added a sentence with the definition before its first appearance.\\nThe $u$ in theorem 6.5 was just a typo; we deleted it in the updated version.\\n2. This is a fair point which we agree with. We removed the numbering from each equation that is not referenced in the text.\\n3. Thanks for raising this point. Concerning Pfeiffer and Houlsby adapters, we apologize for this. We simply accidentally forgot to add them to the table, and we realized that only after the submission deadline. We added them in the revised version.\\nWe cited BiLoRA for fairness as it is directly related to our approach. However, their implementation is not public yet, and we were not able to reproduce their results. Also, please note that they calculate the hypergradients using implicit differentiation; thus, their method is computationally strictly more demanding than ours. \\n\\n* Questions:\\n\\n1. We apologize that this was not clear. What we do in practice is: we run Algorithm 1 for a small number of warm-up steps $n_0>0$ with the initial rank and then truncate $s_n$ (and $U$ and $V$ accordingly) when $n\\\\geq n_0$ by removing the entries of $s_n$ that are smaller than the precision $\\\\varepsilon$. In this way, we reduce to a problem with a smaller rank (as $s$ is now smaller and so are $U$ and $V$). 
The dimension of $s_n$ we have when the algorithm stops is the final rank $r^*$ of the model.\\nAs discussed with reviewer 1wqr, this procedure is theoretically justified by Theorem 6.5, which guarantees the identifiability of the optimal rank (face of the $L^1$ simplex) in a finite number of steps. \\nWe have modified Algorithm 1 by adding step 10, which clarifies this point, and have added a sentence to clarify this in the description on line 306.\\n\\n2. The subsequent equation is there to clarify what \\\"locally approximately constant gradient\\\" means. In other terms, we require the Hessian to be small enough in the ball $S$. These two concepts are related as a zero Hessian on a connected set would imply that the gradient is a constant function (and vice-versa). In Theorem 4.1 we just make explicit the control on the error based on the norm of the Hessian.\\n\\n3. This is a fair point. For the sake of simplicity, we decided to limit the theoretical discussion to the Euclidean case. However, everything transfers with minor adjustments to the Riemannian setting by interpreting all the derivatives through their Riemannian counterpart.\\nA similar derivation is done for example in [Li & Ma, 2024] and the computations in our case would be similar to those presented there.\\n In particular, in our setting the only Riemannian manifold is in the lower-level problem. In this case the stationarity condition should be interpreted as $\\\\partial_{\\\\mathcal B} f_2(\\\\mathcal B^*(s),s) = 0$ where $\\\\partial_{\\\\mathcal B}f_2(\\\\mathcal{B}^*(s),s)$ is now the differential restricted to the tangent space $T_{\\\\mathcal B^*(s)} \\\\mathcal V$ to the manifold $\\\\mathcal V$ at the point $\\\\mathcal B^*(s)$, with $\\\\mathcal V$ either the Stiefel or the oblique manifold in our case. The implicit gradient equation (3) is derived in the same way by interpreting the Hessian as the Riemannian Hessian on $\\\\mathcal V$. 
We thank the reviewer for pointing out this potential source of confusion. We now clearly state this at the end of Section 3.\\n\\n4. Apologies, but we could not find the inverse of $AB$ in the proof; we assume the reviewer is referring to $CD$, whose invertibility is actually one of the assumptions of the theorem.\\n\\n5. Thanks to the reviewer for noticing this typo. Concerning the boundedness, the Oblique and Stiefel manifolds are compact, thus their boundedness follows immediately from continuity.\\n\\n6. We thank the reviewer again; we corrected the typos in the revised version.\\n\\n\\n[Li & Ma, 2024] Jiaxiang Li and Shiqian Ma. Riemannian bilevel optimization. arXiv preprint arXiv:2402.02019, 2024.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"First of all, we would like to thank the reviewer for all the comments. Below, we provide a point-by-point response to the questions raised.\\n\\n1. That is a nice idea; we are working on an illustration to add.\\n2. Splitting the dataset into two parts is not necessary; it is possible to use the full training dataset on both levels of the problem ($\\\\mathcal D_1 = \\\\mathcal D_2 = \\\\mathcal D$). It just makes the formulation more general, and it allows us to use a smaller dataset on the lower level, making the hypergradient computation cheaper.\\n3. Our goal is to minimize an objective function defined as the finite sum of a set of functions. The main challenge here lies in the high computational cost: calculating the objective value or its gradient requires aggregating information across all functions in the set, which is substantially more expensive than evaluating these quantities for a single function or a small batch of them (as in our Stochastic Frank-Wolfe approach). This is especially problematic when the set of functions is large, as each computation scales with the number of functions, quickly making optimization infeasible in practice.
This is also the main reason why Stochastic Gradient methods fit better than Batch Gradient methods in Machine/Deep Learning applications. We now highlight this at the beginning of Section 5 of the revised manuscript.\\n4. Yes, label balance was ensured.\"}", "{\"comment\": \"Thank you for your patient response and for addressing my concerns.\"}", "{\"comment\": \"Thank you for the timely and patient responses. All of my concerns have been thoroughly addressed, and I greatly appreciate the efforts made. I will revise my scores.\"}", "{\"comment\": \"1. By memory consumption, the reviewer refers to the peak memory usage observed during training, which often constitutes a critical bottleneck in the training of large language models (LLMs).\\n\\n2. Could you elaborate on the time complexity and practical time consumption associated with the Frank-Wolfe direction search?\"}
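To illustrate the two computational points discussed in this thread — minibatch gradients sidestep the full finite-sum cost, and a Frank-Wolfe direction search over the $L^1$ simplex reduces to an argmin over gradient coordinates, i.e. $O(d)$ per step — here is a hypothetical sketch (toy objective and function names of our own choosing, not the authors' implementation):

```python
import numpy as np

def stochastic_frank_wolfe_simplex(grad_batch, x0, data, n_steps=200,
                                   batch_size=32, seed=0):
    """Stochastic Frank-Wolfe over the probability simplex.

    grad_batch(x, batch) returns a minibatch estimate of the gradient,
    so each step costs O(batch_size) rather than O(len(data)).
    """
    rng = np.random.default_rng(seed)
    x = x0
    for t in range(n_steps):
        batch = rng.choice(len(data), size=batch_size, replace=False)
        g = grad_batch(x, data[batch])
        # Linear minimization oracle on the simplex: a vertex, found in O(d).
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)          # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex update: stays feasible
    return x

# Toy finite-sum objective: mean_i ||x - d_i||^2 over the simplex.
data = np.random.default_rng(1).standard_normal((1000, 5))
x0 = np.full(5, 0.2)
x = stochastic_frank_wolfe_simplex(lambda x, b: 2 * (x - b.mean(axis=0)),
                                   x0, data)
assert x.min() >= 0 and np.isclose(x.sum(), 1.0)  # still on the simplex
```

Because each update is a convex combination of simplex points, feasibility is maintained without any projection step, which is what makes the per-iteration direction search so cheap.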
5LvTfc4fBz
Physics-enhanced Neural Operator: An Application in Simulating Turbulent Transport
[ "Shengyu Chen", "Peyman Givi", "Can Zheng", "Xiaowei Jia" ]
The precise simulation of turbulent flows is of immense importance in a variety of scientific and engineering fields, including climate science, freshwater science, and the development of energy-efficient manufacturing processes. Within the realm of turbulent flow simulation, direct numerical simulation (DNS) is widely considered to be the most reliable approach, but it is prohibitively expensive for long-term simulation at fine spatial scales. Given the pressing need for efficient simulation, there is an increasing interest in building machine learning models for turbulence, either by reconstructing DNS from alternative low-fidelity simulations or by predicting DNS based on the patterns learned from historical data. However, standard machine learning techniques remain limited in capturing complex spatio-temporal characteristics of turbulent flows, resulting in limited performance and generalizability. This paper presents a novel physics-enhanced neural operator (PENO) that incorporates physical knowledge of partial differential equations (PDEs) to accurately model flow dynamics. The model is further refined by a self-augmentation mechanism to reduce the accumulated error in long-term simulations. The proposed method is evaluated through its performance on two distinct sets of 3D turbulent flow data, showcasing the model's capability to reconstruct high-resolution DNS data, maintain the inherent physical properties of flow transport, and generate flow simulations across various resolutions. Additionally, experimental results on multiple 2D vorticity flow series, generated by different PDEs, highlight the transferability and generalizability of the proposed method. This confirms its applicability to a wide range of real-world scenarios in which extensive simulations are needed under diverse settings.
[ "turbulent flow", "neural operator", "knowledge-guided machine learning", "sequential simulation" ]
Reject
https://openreview.net/pdf?id=5LvTfc4fBz
https://openreview.net/forum?id=5LvTfc4fBz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zeYXb2rVQZ", "wmARpMO9De", "pGfIJNzgt4", "gDxC31nRzr", "ffQjgAOFEY", "eSsW0yJcHT", "dep6hq4QhU", "d6ztWi6C8B", "YmD0FXKvWN", "U8iz7Mh3dE", "U0XrG0KhyJ", "SB8CngUMAb", "PTes3tDH0P", "JzQNNqToeN", "FQIE30BcAe", "CM5k5WQCN2", "7wfAQryxyi", "7cRhVfvXBY", "1xKS84x6b2" ], "note_type": [ "official_review", "meta_review", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730602384040, 1734561253329, 1730262151501, 1730685218975, 1732508472149, 1730573352520, 1732490374159, 1732489582014, 1732499092809, 1737523914495, 1730374524002, 1732492346570, 1732485911972, 1732496937216, 1732563141803, 1732583199565, 1732494560620, 1732528113498, 1730191712726 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_cPpW" ], [ "ICLR.cc/2025/Conference/Submission8511/Area_Chair_khZe" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_P9EC" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_c3wa" ], [ "ICLR.cc/2025/Conference/Submission8511/Area_Chair_khZe" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_yHQa" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_c3wa" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_iKBQ" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_yHQa" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_P9EC" ], [ "ICLR.cc/2025/Conference/Submission8511/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8511/Reviewer_iKBQ" ], [ "ICLR.cc/2025/Conference/Submission8511/Reviewer_gwQ8" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a Physics-Enhanced Neural Operator (PENO) model designed to improve the simulation of turbulent flows. PENO integrates physical knowledge of partial differential equations (PDEs) with a Fourier Neural Operator (FNO) framework, aiming to capture complex flow dynamics accurately. The model introduces a self-augmentation mechanism to maintain high-frequency information, which is often lost in traditional FNO models. Evaluations on multiple turbulent flow datasets show that PENO effectively reconstructs high-resolution DNS data, generalizes across resolutions, and mitigates error accumulation in long-term simulations. This demonstrates PENO's applicability to various scientific and engineering fields where extensive, high-resolution simulations are essential but computationally expensive.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Integrates physics-based knowledge directly into the neural operator framework, enhancing the ability to capture complex flow patterns that purely data-driven models often miss.\", \"Demonstrates high generalizability across various datasets and spatial resolutions, showing adaptability to different turbulent flow conditions.\", \"Introduces a self-augmentation mechanism that preserves high-frequency flow patterns over time, essential for maintaining accuracy in simulations that run for extended periods.\", \"Outperforms existing models in structural similarity index measure (SSIM) and dissipation difference metrics, indicating superior performance in both spatial accuracy and gradient capturing ability.\"], \"weaknesses\": [\"Relies on large eddy simulation (LES) data for optimal results, which, while cheaper than DNS data, still adds to the data requirements and may not always be available.\", \"Risk of 
overfitting in long-term simulations, particularly when adjusting random Gaussian perturbation parameters, as extensive tuning is needed to balance accuracy and robustness.\", \"Challenges with maintaining accuracy in extremely high-dimensional or fine-grained simulations, where capturing all necessary dynamics requires considerable computational power and data.\", \"Not sure if the pressure predictions are also of high quality.\"], \"questions\": \"1. What about the error in the predicted pressures? It has been known that it is more difficult to predict the pressures using neural operators.\\n2. How easy would it be to extend the approach to non-rectilinear domains? One of the main limitations of FNOs is that they are suitable only for rectilinear domains.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces the Physics-Enhanced Neural Operator (PENO), a novel approach that builds upon Fourier Neural Operators (FNOs) to emulate turbulent flows. PENO aims to overcome the limitations of traditional FNOs, including limited generalization, reduced accuracy from neglecting underlying physics, and the inability to capture high-frequency details. PENO attempts to improve upon standard FNOs by incorporating physical knowledge of the underlying PDEs. This is done through a self-augmentation mechanism that the authors claim retains high-frequency information, and by integrating \\\"PDE-enhancement branches\\\" that predict the flow evolution using numerical methods plus a super-resolution step, which is then weighted with a regular FNO branch. The method is evaluated on two datasets: Forced Isotropic Turbulence (FIT) and Taylor-Green Vortex (TGV) flow.\\n\\nReviewers found the approach promising but raised several concerns.
These included the limited length of the demonstrated rollouts, the need for clarification on certain aspects of the method, and comparisons with other approaches, as some important related literature seemed to be absent. Concerns were also raised regarding the rationale for using both LES and DNS inputs, which can diverge due to the chaotic nature of the problem. Additionally, the metrics used are not standard in CFD, making it difficult to gauge the method's behavior and advantages. Several ablation studies are missing, making the relative importance of each innovation to the overall performance unclear. The authors provided only a brief response during the rebuttal period, failing to address the reviewers' concerns or provide an updated manuscript. As such, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Some of the reviewers raised the issue that the paper seems oblivious to a large portion of the literature, that the benchmarks are lacking, and that the evaluation metrics are typical of computer vision rather than the target domain. The authors provided a succinct response but made no modifications to the manuscript.\"}", "{\"summary\": \"This paper proposes to embed PDE-based constraints into FNOs for simulating fluids. Specifically, the paper targets turbulent transport and long-term unrolling, i.e., many repeated invocations of the trained prediction network. FNOs have inherent known limitations in terms of the frequency spectrum that they can process. Here, the FNO blocks are augmented with \\u201cPDE branches\\u201d that predict the evolution of the flow with a regular numerical method.\\n\\nTo prevent losing smaller frequencies, the authors propose to predict two versions in parallel (low and high resolution), that are merged to produce an output that should better reflect the frequency distribution of the targets.\\n\\nThe method is evaluated on an isotropic turbulence case, and a Taylor-Green vortex.
The authors also share their implementation at submission time, which is neat to see. Unfortunately, only very short rollouts of 10 to 20 steps are shown. For \\u201creal\\u201d applications in turbulence this seems very short.\\n\\nTwo recommendations regarding the presentation: Fig. 4 does not really show the important part, the high frequencies. As often done in fluids papers, I can recommend rescaling by wavenumber (optionally with an exponent >1), or providing some metrics on how much the frequency content improves.\\n\\nWhy give separate values for each channel in Table 1? You could simply mention that SRGAN has difficulties with the z-component, but it\\u2019s usually taken for granted that a \\u201cproper\\u201d architecture can handle multiple output channels.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper has several interesting aspects:\", \"It targets important and challenging scenarios\", \"Improvements over regular FNOs are demonstrated\", \"The submission comes with three-dimensional test cases\", \"A nice range of \\u201cpure prediction\\u201d baselines are included\"], \"weaknesses\": [\"On the negative side\", \"Effectively only 2 test scenarios are evaluated\", \"The metrics for evaluating the quality of the turbulent outputs are \\u201cunusual\\u201d; why not employ common metrics, starting with MSE, over energy spectra, TKE etc.?\", \"Section 3.2 is fairly unclear; Q-tilde and Q-hat are not used in any equations\", \"The length of the rollouts is extremely short. Long-term stability is not evaluated.\", \"Maybe the largest problem I see with this submission is the omission of previous solver-in-the-loop approaches. These have a very similar goal of improving a coarse simulated baseline (here the evolution of eq. 1) with a learned model. The paper neither cites nor discusses any of the related works, Um\\u20192020, Kochkov\\u20192021 etc.
As far as I understand, the method in 3.1 corresponds exactly to a learned correction setup with an FNO as the architecture. I still see merit in the subtler differences, and 3.2 seems to be non-standard (albeit a bit unclear), but these parallels should be made clear from the beginning. Similarly, the discussion of new aspects should put the work in context with those previous papers.\", \"As many points were left open after the rebuttal and discussion, I don't recommend accepting the paper to ICLR in its current form. I think there are quite a few important open questions to address before publication.\"], \"questions\": \"Please comment on similarities with solver-in-the-loop approaches, and clarify 3.2, as outlined above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is focused on improving the Fourier Neural Operator, incorporating physical knowledge and making some architectural improvements in order to better simulate turbulent flow. By adding physics knowledge, adding an extra network branch, and adding high-frequency signals at each time step, the method is designed to outperform existing techniques for this application. The authors' PENO method is evaluated on well-known datasets / test problems.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is written well, and the figure quality is high.\\n\\n2. The paper makes several novel (from what I can tell) modifications to the existing FNO algorithm, and it appears that these all turn out to be improvements.\\n\\n3. Results seem quite positive: it looks like the PENO method outperforms the target baselines exactly as hoped by the authors.\", \"weaknesses\": \"1. It is not clear how much each of the various improvements contributes to the overall success of the method.
For example, if you just use FNO with the additional self-augmentation mechanism or just with the additional physics data, how well do things perform compared to what is ultimately the PENO method?\\n\\n2. Fig. 6 is somewhat helpful, but it might be more informative to instead show the DNS result, and then just plot the errors of |method - DNS| for each method? In particular, since you're using SSIM, readers may naturally want to look at the spatial distributions of errors of the various methods, not just the magnitudes, so that could be helpful.\\n\\n3. I think there are some missing details around training: what optimization algorithm was used, what learning rates, how long did training take (and ideally how does this compare to other methods like FNO), what do the training and validation loss curves look like (so we can see how much overfitting may be happening), etc.\\n\\nMinor: \\\"In the field of computational fluid dynamics (CFD),\\\" just say \\\"In CFD,\\\" here.\", \"questions\": \"1. Table 1: Why were SSIM and dissipation difference the chosen metrics here? This could be informative for readers to understand.\\n\\n2. Could the authors speak at all to the generalizability of the PENO technique to problems besides turbulent flow?
How do the authors imagine other researchers building upon and extending the present work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewers' Response\", \"comment\": \"Dear Reviewers,\\n\\nAs the author-reviewer discussion period is approaching its end, I would strongly encourage you to read the authors' responses and acknowledge them, while also checking if your questions/concerns have been appropriately addressed.\\n\\nThis is a crucial step, as it ensures that both reviewers and authors are on the same page, and it also helps us to put your recommendation in perspective.\\n\\nThank you again for your time and expertise.\\n\\nBest,\\n\\nAC\"}", "{\"summary\": \"The authors provide an interesting approach to Physics-informed ML. However, I cannot recommend this paper for acceptance unless (i) the paper can clarify some missing information and (ii) additional information is provided on the model comparison with other approaches (see questions). This is nonetheless promising work -- hopefully the authors can address these shortcomings in paper presentation in the rebuttal.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. Good motivation for neural operator work -- surrogates for DNS are a major research focus.\\n2. Good test cases: GIT, FIT and Taylor-Green are well-accepted configs in the physics community.\\n3. Potentially promising results.\", \"weaknesses\": \"1. Quantitative evaluation in Table 2 is missing model parameters/FLOPs to discount scaling effects on the architecture comparison.\\n2. More information on the training data could be provided in the appendix. See question 4.\", \"questions\": \"1. Line 76 -- Are Neural Operators really that efficient compared to other deep learning approaches?
Chung et al (NeurIPS 2023) showed that Fourier neural operators in turbulent flow applications can have large matrix dimensionalities that result in poor scaling behavior when compared to other NN architectures.\\n2. Figure 2 -- How does it make sense to have both DNS and LES data at time t as inputs? After an initial condition, LES and DNS behavior would drift from each other since the missing physics in LES would change the trajectory of the simulation. There is an ablation study showing minor improvements from LES inputs, but some clarification on the fundamental mechanism would be useful for readers.\\n3. Table 2 -- What is the number of parameters/FLOPs of the different architectures? Model scales can affect predictive performance -- see Chung et al (NeurIPS 2023).\\n4. What numerical solver was used for the data? What is the spatial and temporal differencing scheme? Are the flows compressible or incompressible? What is the resolution of DNS w.r.t. Kolmogorov lengthscales?\\n5. What are the exact numbers of training and test samples used for each case? A table in the appendix could be useful for clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for engaging with the reviewers. I believe that incorporating points like your responses to C4 and C5, and including the new (or newly proposed) results regarding C1 and C2 into the paper, will be quality improvements for the manuscript. These aren't major changes (I had expected the authors would be able to address them), and so in my view, my score will remain the same. I will monitor other changes made and discussions with other reviewers to see if further reevaluation is warranted.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We appreciate your time and efforts in providing insightful comments.
Below, we tried our best to address your concerns.\\n\\nC1 Risk of overfitting in long-term simulations, particularly when adjusting random Gaussian perturbation parameters, as extensive tuning is needed to balance accuracy and robustness.\\n\\nWe do not think PENO requires extensive additional tuning. The reasons are as follows: (1) As shown in Tables 1 and 2, when comparing PENO, PENO_SR (without Gaussian perturbation), and PENO_SA, the upscaling step provides a significantly greater improvement than the Gaussian perturbation. (2) As shown in Table 5 in the appendix, varying the parameters of the Gaussian perturbation only causes the performance to fluctuate within a very narrow range. This indicates that the model's sensitivity to these parameters is minimal, reducing the need for extensive tuning.\\n\\nC2 Challenges with maintaining accuracy in extremely high-dimensional or fine-grained simulations, where capturing all necessary dynamics requires considerable computational power and data.\\n\\nWe agree with your comment that achieving accuracy in such simulations does require additional computational power and data. However, our primary goal is sequential prediction rather than super-resolution. We assume that a portion of high-resolution data is available at the beginning of the simulation to train the model to capture the physical patterns. Using this initial data, the model can effectively generate predictions for future time periods. Additionally, we have the option to incorporate LES data to support the simulation process, which can help achieve better results.\\n\\nC3 What about the error in the predicted pressures? It has been known that it is more difficult to predict the pressures using neural operators.\\n\\nWe use pressure values solely to support the generation of velocity data in the PDE-enhancement branch. In our setup, we do not predict pressure values.\\n\\nC4 How easy would it be to extend the approach to non-rectilinear domains?
One of the main limitations of FNOs is that they are suitable only for rectilinear domains.\\n\\nPENO can extend to non-rectilinear domains. One potential approach involves coordinate mapping, where the irregular domain is transformed into a rectilinear grid using a mapping function (e.g., curvilinear coordinates). The PENO operates in the transformed space, and the results are mapped back to the original domain. Another approach replaces the global Fourier transform with pseudo-spectral methods that use localized basis functions, such as Chebyshev polynomials or wavelets, to approximate the spectral representation on irregular grids.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We appreciate your time and efforts in providing insightful comments. Below, we tried our best to address your concerns.\\n\\nC1. At L141, it includes a transformation from the spatial domain to the Fourier domain for at time . Is the integration domain for the Fourier transformation to be a for a fixed time step, or is it for multiple time steps? The Fourier transform is written through integrating on , but I believe that the authors meant integration on the Fourier domain. Consider writing these variables differently for better exposition.\\n\\nWe acknowledge that it was our mistake to use the wrong symbol to represent $t$. We will correct this in the camera-ready version of the paper.\\n\\nC2. At L209 the authors parametrize the non-linear function that represents the temporal derivative of through a parameter . Why is this relevant? I could not find where this parameter was referenced in other places of the manuscript.\\n\\nThe symbol $\\\\theta$ mentioned in line 209 does not represent the model's parameters. Instead, it is used as a general notation to describe the parameters contained in the Navier-Stokes (NS) equation.\\n\\nC3. Equation 1 denotes the evolution of an arbitrary variable. Why not write it only for velocity?
If one considers the Navier-Stokes equation with regard to the vorticity, different formulations have to be considered (e.g., vorticity-streamfunction vs vorticity-velocity formulations), alongside potential complications in the definition of boundary conditions.\\n\\nWe aim to show that the PDE-enhancement branch is not limited to the Navier-Stokes (NS) equation with respect to velocity but can also be applied to other variables. The only requirement is to adapt the branch to the corresponding PDE formulation for those variables.\\n\\nC4 Why didn\\u2019t the authors include the MSE in Tables 1 and 2? There seems to be space to do so, and it would benefit the quantitative analysis of the paper.\\n\\nThe evaluation metric, dissipation differences, serves a similar function to MSE. It calculates the pixel-wise difference between the generated DNS data and the real DNS data. Moreover, compared to MSE, dissipation differences can more effectively show the dynamic change between each pair of pixels within DNS flow data. More details on the explanation of dissipation differences can be found in the last paragraph of Section 4.1.\\n\\nTo better clarify the advantages of the proposed PENO method, we have provided a new quantitative analysis based on the MSE values on the FIT and TGV datasets. We include the proposed PENO-based methods and selected baselines in these experiments. The results of FIT are:\\n\\nDCS/MS (0.158, 0.159, 0.158) \\nFSR (0.151, 0.153, 0.151) \\nCTN (0.123, 0.121, 0.123) \\nFNO (0.072, 0.071, 0.070) \\nPRU (0.065, 0.063, 0.063) \\nPENO (0.049, 0.048, 0.049) \\nPENOSR (0.038, 0.037, 0.038) \\nPENOSA (0.035, 0.034, 0.034)\\n\\nThe results of TGV are:\\n\\nDCS/MS (0.069, 0.070, 0.069) \\nFSR (0.062, 0.063, 0.062) \\nCTN (0.088, 0.089, 0.088) \\nFNO (0.068, 0.069, 0.068) \\nPRU (0.039, 0.041, 0.040) \\nPENO (0.037, 0.038, 0.037) \\nPENOSR (0.029, 0.030, 0.029) \\nPENOSA (0.025, 0.026, 0.026) \\n\\nBased on these tables, we can easily observe that the proposed PENO-based methods significantly outperform the baselines, and the effectiveness of each proposed component can be justified by comparisons amongst FNO, PRU, and the PENO-based methods. In addition, we will include these supplementary experimental results in the appendix of the camera-ready version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"Direct numerical simulations of turbulent flows can be prohibitively expensive to carry out. Fully data-driven or hybrid physics-based machine learning models can be quite promising in reducing the turnaround times for reconstructing fine-scale data from coarse-grained simulations or for long-time prediction of the flow-field given historical DNS data. Motivated by the potential benefits of Fourier neural operators in handling complex spatio-temporal data, the authors propose a physics-enhanced neural operator method to model complex flow-field dynamics.\\nWhile FNOs work in a purely data-driven fashion, the PENO, in addition to data, also leverages physics knowledge in the form of the underlying governing PDEs of turbulent flows. The authors also introduce a self-augmentation technique to enable long-time simulations/roll-outs. They demonstrate the model's capability on different turbulent flow datasets and test across different resolutions as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem definition is clear with regard to the PENO being trained under a forecasting objective satisfying the physics constraints.\\nInstead of having a single network satisfy both data-driven and physics-based constraints, PENO has two branches, one an FNO and the other a physics-based PDE branch.
The final prediction / forecast is a weighted combination of the outputs of both branches. Instead of using continuous derivatives, the authors use temporal and spatial discretisation to estimate the gradient terms in the governing equations. This is beneficial as it reduces the load on the neural network to strongly learn the continuous derivatives in the presence of only sparse data.\\nThe authors clearly describe their methodology, datasets used, and training and testing protocol.\\nThey provide results validating their method across different benchmarks and other data-driven models.\\nThe objective function is a simple forecasting MSE-based loss function. This makes the learning easy and could prevent the competing-objectives problem otherwise encountered in PINNs. Moreover, PENO allows multiple data sources to be combined as well, such as DNS and LES, through a weighted combination.\", \"weaknesses\": \"Although the authors claim novelty in the physics-enhanced operator, they seem to have ignored previous works on physics-informed/enhanced operator learning in this domain: Physics-informed neural operator, https://arxiv.org/pdf/2111.03794v3; Physics-informed DeepONet, https://ar5iv.labs.arxiv.org/html/2207.05748; and their derivatives. Instead of comparing their model with other operator learning frameworks, they in fact compare with some of the super-resolution models, which in some way seems out of context. It would have been better if the authors could have provided a comparison with the other previously proposed physics-based operator learning frameworks which came out much earlier than this work. While the contours look decent in the results, a comparison of spatial and temporal spectra would be worthwhile in showing if the model is capable of overcoming the spectral bias that otherwise plagues neural networks in general. It is not clear if PENO can operate in a purely physics-based training regime without the data-driven component as in PINO.\", \"questions\": \"1.
Could you elaborate on the key differences between PENO and the Physics-Informed Neural Operator?\\n2. Is there any reason not to compare your model with other operator-learning-based frameworks and not include them in the survey or in this study?\\n3. Could you explain why temporal or spatial spectra of the flow-fields have not been included in the evaluation? Any justification for why dissipation difference has been considered?\\n4. Can PENO be used in a fully data-free regime as in the PINO paper, where, given only the initial and boundary conditions, the model can be trained to simulate turbulent flow?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We appreciate your time and efforts in providing insightful comments. Below, we tried our best to address your concerns.\\n\\nC1. Quantitative evaluation in Table 2 is missing model parameters/FLOPs to discount scaling effects on architecture comparison.\\n\\nThe total parameters of the models are as follows: FNO (62,337), PRU (84,938), and PENO (151,284). SR methods typically have over a million parameters. Compared to SR-based methods, PENO remains efficient.\\n\\nC2. Line 76 -- Are Neural Operators really that efficient compared to other deep learning approaches? Chung et al (NeurIPS 2023) showed that Fourier neural operators in turbulent flow applications can have large matrix dimensionalities that result in poor scaling behavior when compared to other NN architectures.\\n\\nThe FNO requires less than 5 minutes for training and less than 10 seconds for inference. The PRU takes approximately 8 minutes for training and about 20 seconds for inference. PENO, on the other hand, takes around 10 minutes for training and about 30 seconds for inference.
In contrast, other neural network architectures often require over an hour for training and several minutes for inference. This makes PENO significantly more efficient in terms of both training and inference time compared to other neural network architectures.\n\nC3. Figure 2 -- How does it make sense to have both DNS and LES data at time t as inputs? After an initial condition, LES and DNS behavior would drift from each other since the missing physics in LES would change the trajectory of the simulation. There is an ablation study to show minor improvements from LES inputs. But some clarification on the fundamental mechanism would be useful for readers.\n\nIn data-driven models, such as the one shown in Figure 2, error and bias can accumulate over time, especially in the later stages of simulation. While data-driven methods like FNO can effectively simulate data in the short term, their performance tends to degrade over longer time horizons as errors compound. On the other hand, although LES data has inherent biases due to missing physics, it serves as a valuable supplement for data simulation, particularly in longer-term scenarios. As demonstrated in Table 3 of the appendix, incorporating LES data can lead to notable improvements, especially during the later stages of the simulation. This combined approach leverages the strengths of both data-driven methods and LES to enhance the overall accuracy of data simulation.\n\nC4. What numerical solver was used for the data? What is the spatial and temporal differencing scheme? Are the flows compressible or incompressible? What is the resolution of DNS w.r.t. Kolmogorov lengthscales?\n\nBoth FIT and TGV are incompressible. The numerical solver used is a pseudo-spectral method. The spatial and temporal information is provided in Section B.1 of the dataset description.\n\nC5. What are the exact numbers of training and test samples used for each case? 
A table in the Appendix could be useful for clarity.\n\nThe number of training and testing samples is described in the experimental design section on page 7. We will incorporate your suggestion to summarize this information in a table in the final camera-ready version of the paper.\"}", "{\"title\": \"Thank you for your reviews.\", \"comment\": \"We appreciate your time and efforts in providing insightful comments. Below, we tried our best to address your concerns.\n\nC1: It is not clear how much each of the various improvements contributes to the overall success of the method. For example, if you just use FNO with the additional self-augmentation mechanism or just with the additional physics data, how well do things perform compared to what is ultimately the PENO method?\n\nWe conducted new experiments with the FNO incorporating an additional self-augmentation mechanism. The quantitative performance metrics are as follows: SSIM (0.767, 0.761, 0.763) and Dissipation Difference (0.038, 0.039, 0.038). These results are worse than those achieved with the proposed PENO method.\nIn Table 3 of the appendix, we provide the experimental results for FNO with additional LES data. The performance is worse compared to PENO without LES data.\n\nC2: Fig. 6 is somewhat helpful, but it might be more informative to instead show the DNS result, and then just plot the errors of |method - DNS| for each method? 
In particular, since you're using SSIM, readers may naturally want to look at the spatial distributions of errors of the various methods, not just the magnitudes, so that could be helpful.\n\nWe will take your suggestion and include figures of the spatial error distributions in the final camera-ready version of the paper.\n\nC3: I think there are some missing details around training: what optimization algorithm was used, what learning rates, how long did training take (and ideally how does this compare to other methods like FNO), what do the training and validation loss curves look like (so we can see how much overfitting may be happening), etc.\n\nThe implementation details are provided in Appendix B.2.\n\nC4: Table 1: Why were SSIM and dissipation difference the chosen metrics here? This could be informative for readers to understand.\n\nThe reason we selected SSIM is that it is highly sensitive to structural information, making it a suitable metric for evaluating the overall structural quality of generated data. Additionally, in computational fluid dynamics, researchers not only focus on the overall structure but also pay close attention to spatial variations in detail. Therefore, we use Dissipation Difference, a commonly used metric in this field, as another standard.\n\nC5: Could the authors speak at all to the generalizability of the PENO technique to problems besides turbulent flow? How do the authors imagine other researchers building upon and extending the present work?\n\nPENO is fundamentally a Partial Differential Equation (PDE) solver, designed to handle systems governed by PDEs. While this work focuses on turbulent flow\u2014a highly complex and challenging case in fluid dynamics\u2014PENO's applicability is not restricted to this specific type of flow. 
It can also be effectively applied to simulate other types of fluid flows, such as water flow, atmospheric flow, and other scenarios commonly encountered in fluid dynamics.\n\nThe versatility of PENO extends beyond fluid dynamics. As a general PDE solver, it has the potential to be utilized in a wide range of dynamical systems that are guided by PDEs. These systems could include, but are not limited to, heat transfer, electromagnetism, and chemical reaction-diffusion processes. By adapting the solver to the specific equations and physical constraints of a given system, researchers can leverage PENO to address challenges in various scientific and engineering domains.\"}", "{\"comment\": \"We appreciate your time and efforts in providing insightful comments. Below, we tried our best to address your concerns.\n\nC1. The metrics for evaluating the quality of the turbulent outputs are \u201cunusual\u201d, why not employ common metrics, starting with MSE, over energy spectra, TKE etc.? \n\nThe evaluation metric, dissipation differences, serves a similar purpose to MSE by calculating the pixel-wise difference between the generated DNS data and the real DNS data. However, compared to MSE, dissipation differences are more effective and sensitive in capturing the dynamic changes between each pair of pixels within the fine-grained DNS flow data. MSE, on the other hand, cannot adequately capture these dynamics. For example, as shown in FNO's results in the figure, the output flow data appears smooth, yet the MSE remains very low, failing to reflect the complex variations in the flow.\nIn addition, we will take your suggestion and include figures of the temporal and spatial spectra of the flow fields in the final camera-ready version of the paper.\n\nC2. Section 3.2 is fairly unclear, Q-tilde and Q-hat are not used in any equations\n\nQ-hat is defined on line 187 in Section 3.1, while Q-tilde is defined on line 283 in Section 3.2.\n\nC3. 
Please comment on similarities with solver-in-the-loop approaches.\n\nThe central idea of solver-in-the-loop approaches is to iteratively update the solver to enhance performance. However, our proposed method takes a different approach. FNO has significant limitations in simulating 3D turbulent flows, and even with iterative updates, it cannot achieve satisfactory simulation performance. \nOur method introduces a PDE-enhancement branch and a self-augmentation mechanism. These supplementary modules are specifically designed to overcome the weaknesses of FNO and significantly improve the overall performance of the model.\"}", "{\"comment\": \"Thank you for addressing some of my concerns.\n\n1. and 2. Thanks for doing the param calculations, but what are the theoretical FLOPs values of the different architectures studied in this paper? A clearer presentation of this can help readers.\n\n3. This is sufficient discussion. Thank you. Please add this to the revised paper, if not already present.\n\n4. Kolmogorov lengthscales are still not mentioned.\n\n5. Number of train and test samples are still not explicitly mentioned on pg 7.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"I thank the authors for their updates and comments.\n\nTo me it's still unclear where the difference to a solver-in-the-loop approach is. Doesn't the Kochkov et al. paper exactly target forced isotropic turbulence cases? The main difference seems to be replacing the CNN there with an FNO here. That being said, the Kochkov CNN version seemed to be very stable, and to produce rollouts for thousands of time steps.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We appreciate your time and efforts in providing insightful comments. Below, we tried our best to address your concerns.\n\nC1. 
Could you elaborate on the key differences between PENO and the Physics-Informed Neural Operator?\n\nThe major difference between our proposed PENO and PINO lies in how they incorporate the FNO benchmark and utilize PDE information. PINO uses PDEs as a physics-informed loss to regularize the model during training. In contrast, our proposed PENO integrates a PDE-enhancement branch directly into the model architecture, embedding the PDE format into the structure itself. This structural integration allows PENO to inherently encode the physical principles, rather than relying solely on loss-based regularization.\nIncorporating PDEs directly into the model structure, rather than as a loss function, offers significant advantages grounded in KGML theory. By embedding PDEs into the model architecture, the physical laws become an integral part of the representation, ensuring that the model inherently adheres to these principles throughout training and inference. This structural integration reduces the reliance on large datasets and mitigates the risk of overfitting to noise, as the encoded physics acts as a strong prior. In contrast, using PDEs as a loss function primarily guides the optimization process but does not guarantee strict adherence to physical laws in the learned representations, and is also less stable.\n\nC2. Is there any reason not to compare your model with other operator-learning-based frameworks and not include them in the survey or in this study?\n\nWe provide the following supplementary experimental results (left: SSIM, right: Dissipation diff) on both the FIT and TGV datasets for your reference. 
We will also include these results in the final camera-ready version of the paper.\n\nFIT dataset:\nDeepOnet (0.909, 0.911, 0.909) (0.156, 0.154, 0.155), \nFNO (0.912, 0.915, 0.911) (0.153, 0.151, 0.150),\nPiDeepOnet (0.923, 0.923, 0.924) (0.143, 0.142, 0.142),\nPINO (0.931, 0.933, 0.934) (0.134, 0.133, 0.133),\nPENO (0.968, 0.972, 0.967) (0.110, 0.107, 0.110)\n\nTGV dataset:\nDeepOnet (0.641, 0.638, 0.642) (0.075, 0.074, 0.074),\nFNO (0.645, 0.646, 0.648) (0.072, 0.071, 0.072),\nPiDeepOnet (0.705, 0.706, 0.705) (0.043, 0.042, 0.043),\nPINO (0.723, 0.722, 0.721) (0.040, 0.042, 0.040),\nPENO (0.843, 0.847, 0.844) (0.032, 0.033, 0.034)\n\nFrom the above results, we can see that our proposed PENO outperforms all of these neural-operator-based methods. \n\nC3. Could you explain why temporal or spatial spectra of the flow fields have not been included in the evaluation? Any justification for why dissipation difference has been considered?\n\nWe will take your suggestion and include figures of the temporal and spatial spectra of the flow fields in the final camera-ready version of the paper.\n\nC4. Can PENO be used in a fully data-free regime as in the PINO paper? That is, given only the initial and boundary conditions, can the model be trained to simulate turbulent flow?\n\nYes, this is included in the transferability experimental results shown in Figure 8 and Section 4.3 on page 10.\"}", "{\"title\": \"Acknowledgement of author response and additional comments.\", \"comment\": \"Dear Authors,\n\nThanks for addressing my questions. \n\n**Comparison with other operator learning frameworks**\n1. The performance comparison of the operator learning methods on the FIT and TGV datasets indeed indicates that PENO outperforms the rest. \n2. It would also be better to bring forth the key features/differences between these methods and PENO in the literature/methodology section to highlight your contributions better. \n3. 
Spatial/spectrum plots would highlight the reconstruction performance of the model better. \n\n**Transferability/forward problems**\n1. If I understand correctly, the transferability experiment is carried out on a trained model, with or without fine-tuning, for PDEs different from the one used to train PENO in the first place. \n2. Training of PENO still requires data. \n3. However, I would be interested to know whether, given the PDE structure in the PENO architecture, it is possible to train PENO to solve a forward problem with just initial and boundary conditions available while training. Basically, can PENO function as a CFD solver like Deep Galerkin Methods/Deep Ritz Methods/PINNs? \n\nThanks\"}", "{\"summary\": \"This paper proposes a physics enhanced neural operator (PENO) for regressing Direct Numerical Simulation (DNS) states to accurately model turbulent flows. The proposed pipeline consists of a next-step neural predictor that takes an input state of the solver, consisting of a combination of DNS and Large Eddy Simulation (LES) variables (velocity/vorticity/pressure) at a given time step $t$. Then the PENO method predicts the next time step; this can be done recursively to predict a series of steps, effectively advancing the simulation without requiring a numerical solver. The proposed approach is based on Fourier Neural Operators (FNOs), which approximate the Navier-Stokes Partial Differential Equation (PDE) solution through a transformation of the input variables to Fourier space. However, FNOs trained with mean squared error losses have two shortcomings: limited generalization and reduced accuracy due to ignorance of the underlying PDE, and impaired ability to capture high-frequency information of the learned dataset. 
The authors tackle these issues by estimating temporal gradients through direct assessment of the underlying PDE and through augmenting the regression with an additional network branch that performs super-resolution to capture high-frequency details. The method is trained and tested on two datasets: the Forced Isotropic Turbulent flow (FIT) and the Taylor-Green vortex (TGV) flow, and the authors present some quantitative and qualitative analysis comparing the proposed approach against previous methods.", "soundness": "2", "presentation": "2", "contribution": "1", "strengths": "The authors correctly identify issues with the known spectral bias in networks trained with MSE losses. These approaches tend to fail to capture crucial high-frequency information of the learned dataset. The proposed architecture is compared against previous baselines, and it shows interesting qualitative evaluations for next time step DNS prediction. Lastly, using the PDE information during training is an effective way to improve accuracy and generalization.", "weaknesses": "My biggest concern with papers that propose to replace solvers is their usual lack of fair comparison to a well-implemented physics solver running on similar hardware (GPU) (for a thorough analysis check \"Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations\"). A solver is a computational graph that can accurately predict the next state without the usual issues of data-driven approaches (lack of generalization, reduced accuracy). Without a clear intuition as to why a neural approach is able to make such a computational graph more efficient (e.g., reducing the dimensionality of the input through model reduction), it\u2019s hard to justify why such a method is useful in the first place. This is the case for this paper, as the authors didn\u2019t provide a fair comparison between their method and a DNS solver. 
The authors could include a comparison of computational efficiency and accuracy between their method and a state-of-the-art DNS solver implemented on similar hardware for better assessing the efficiency of the proposed approach. Additionally, the authors could also more explicitly discuss the potential advantages of their neural approach over traditional solvers.\n\nThe paper also could have done a better job of evaluating errors in time, which are one of the common issues of regressing simulations with neural approaches, as this issue is not present in traditional solvers. Neural approaches that integrate in time tend to highly diverge relative to a ground truth solver; check \u201cStability analysis of chaotic systems from data\u201d and \u201cHow Temporal Unrolling Supports Neural Physics Simulators\u201d for further references. Furthermore, the authors could include a more detailed analysis of how prediction errors accumulate over time, showing more precise error metrics (apart from the ones presented in Figure 5) for different prediction horizons. The authors could also discuss how their method addresses (or doesn't address) the issue of divergence over time compared to traditional solvers.\n\nMoreover, the chosen network architecture/approach for the regression of the next time step is outdated, since currently many other authors are now relying on the power of diffusion models to properly capture the complex behavior of fluid simulations. The authors even add a bit of noise to \u201cimprove generalization\u201d, but it\u2019s stated in the paper that a careful evaluation of such an approach is needed. I would suggest that just implementing a straightforward conditioned diffusion model would be a better solution. Otherwise, the authors could also compare their method against a state-of-the-art diffusion model approach, explaining why they chose their current architecture over more recent alternatives. 
The two-stage network solution also seems a bit cumbersome, and I suspect one could do the regression in a single step. Perhaps the authors could do a better job at justifying why this approach was chosen over a single-step regression.\n\nThe manuscript, in its current version, also contains several exposition issues. Several typos (e.g., L025: \u201c. we further\u201d, L030: \u201cresults confirms\u201d, L082: \u201ctraining data are scarce\u201d, L144: \u201cdenotes\u201d, L174: \u201cfiltration\u201d -> filtering, L223 (Figure 3): \u201cNaiver Stoke\u201d, L241: \u201cappendix\u201d, L275: \u201cdo not require\u201d, L278: \u201c, We create output\u201d, L341: \u201c5, 024\u201d, L478: \u201c. the simulated\u201d, L494: \u201cfaces increased\u201d) are present. These errors reduce readability and can be confusing to the reader. \n\nLastly, the authors miss out on several important references that are relevant to this work. The following papers are of direct relevance to the current submission: \u201cBenchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation\u201d, \u201cUncertainty-aware Surrogate Models for Airfoil Flow Simulations with Denoising Diffusion Probabilistic Models\u201d, \u201cUnsteady Cylinder Wakes from Arbitrary Bodies with Differentiable Physics-Assisted Neural Network\u201d and \u201cHow Temporal Unrolling Supports Neural Physics Simulators\u201d.\n\nThe aforementioned weaknesses and questions present in the next section justify my score for this paper.", "questions": ["At L141, a transformation from the spatial domain to the Fourier domain is included for $\\mathbf{Q}(t)$ at time $t$. Is the integration domain for the Fourier transformation for a fixed time step, or is it for multiple time steps? The Fourier transform is written through integrating on $dt$, but I believe that the authors meant integration on the Fourier domain. 
Consider writing these variables differently for better exposition.", "At L209 the authors parametrize the non-linear function that represents the temporal derivative of $\\mathbf{Q}$ through a parameter $\\theta$. Why is this relevant? I could not find where this parameter was referenced in other places of the manuscript.", "Equation 1 denotes the evolution of an arbitrary variable $\\mathbf{Q}$ when $\\mathbf{Q} = \\vec{v}$, where $\\vec{v}$ is the fluid velocity. Why not write it only for velocity? If one considers the Navier-Stokes equation with regard to the vorticity, different formulations have to be considered (e.g., vorticity-streamfunction vs vorticity-velocity formulations), alongside potential complications on the definition of boundary conditions.", "Why didn\u2019t the authors include the MSE in Tables 1 and 2? There seems to be space to do so, and it would benefit the quantitative analysis of the paper."], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}
5LXcoDtNyq
Holistic Reasoning with Long-Context LMs: A Benchmark for Database Operations on Massive Textual Data
[ "Seiji Maekawa", "Hayate Iso", "Nikita Bhutani" ]
The rapid increase in textual information means we need more efficient methods to sift through, organize, and understand it all. While retrieval-augmented generation (RAG) models excel in accessing information from large document collections, they struggle with complex tasks that require aggregation and reasoning over information spanning across multiple documents--what we call \textit{holistic reasoning}. Long-context language models (LCLMs) have great potential for managing large-scale documents, but their holistic reasoning capabilities remain unclear. In this work, we introduce HoloBench, a novel framework that brings database reasoning operations into text-based contexts, making it easier to systematically evaluate how LCLMs handle holistic reasoning across large documents. Our approach adjusts key factors such as context length, information density, distribution of information, and query complexity to evaluate LCLMs comprehensively. Our experiments show that the amount of information in the context has a bigger influence on LCLM performance than the actual context length. Furthermore, the complexity of queries affects performance more than the amount of information, particularly for different types of queries. Interestingly, queries that involve finding maximum or minimum values are easier for LCLMs and are less affected by context length, even though they pose challenges for RAG systems. However, tasks requiring the aggregation of multiple pieces of information show a noticeable drop in accuracy as context length increases. Additionally, we find that while grouping relevant information generally improves performance, the optimal positioning varies across models. Our findings surface both the advancements and the ongoing challenges in achieving a holistic understanding of long contexts. These can guide future developments in LCLMs and set the stage for creating more robust language models for real-world applications.
[ "long-context", "reasoning", "LLM" ]
Accept (Poster)
https://openreview.net/pdf?id=5LXcoDtNyq
https://openreview.net/forum?id=5LXcoDtNyq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uLLwMPApeO", "gt6kvEsXqA", "dVYS2MmdeA", "ahg0Sb08ZC", "XwuGzfense", "VVj8cJkxLS", "QWNGrxJJvL", "QUdqpxhhgd", "PCQYEHXXPv", "OLnzpdfLxs", "LqvQdPXkzt", "JNRf2Cw4aa", "IovWqCErn6", "IUk141vszU", "I82YmGhVof", "H0Nhm3ZBeg", "DFOioybi4k" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730696508466, 1732750938852, 1732814547183, 1731975054376, 1731973915150, 1732639898100, 1733161860500, 1734751719721, 1731974985453, 1732751531692, 1730439974657, 1737523843513, 1731974200612, 1730718554970, 1730715647285, 1733249106530, 1732751287654 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7520/Reviewer_p5kH" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Reviewer_Epz7" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Area_Chair_yqgH" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Reviewer_Epz7" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Reviewer_KK9H" ], [ "ICLR.cc/2025/Conference/Submission7520/Reviewer_53mu" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ], [ "ICLR.cc/2025/Conference/Submission7520/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces HoloBench, a benchmark aimed at evaluating the holistic reasoning capabilities of 
Long-Context Language Models (LCLMs) when handling multi-document contexts. The framework aims to systematically assess LCLMs by controlling factors like context length, information density, distribution, and query complexity. The study uses a text-to-SQL dataset and corresponding databases to generate datasets for evaluating long-context language models. Based on the proposed benchmark, the study shows that the quantity of information within the context more significantly impacts model performance than the pure length of the context. The findings provide insights into the strengths and limitations of LCLMs in processing large-scale text.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": "1. The paper offers a reasonable definition of the holistic reasoning capability of LLMs. The motivation of employing a text-to-SQL dataset to construct an evaluation framework for the holistic reasoning capabilities of Long-Context Language Models is reasonable.\n2. The methodologies of benchmark construction are reasonable, for example, constructing data with varying difficulty levels and types based on the inherent difficulty classifications of the text-to-SQL dataset and the types of queries.\n3. Given that the extractable information from databases can be freely controlled, the proposed Automated and Scalable Evaluation is practical.\n4. The paper is written in a concise and coherent manner, offering illustrative figures that facilitate comprehension. The experimental results are clearly presented and easy to read.", "weaknesses": "1. Limited benchmark size:\nAccording to Section 3.3, HoloBench consists of only 90 questions, which is significantly smaller in scale compared to other benchmarks like BABI Long and NOCHA. The small size of the benchmark makes it susceptible to randomness, compromising the reliability of the evaluation results.\n\n2. 
Lack of diversity and practicality:\\nSince each database table stores data of a similar type, the data constructed by HoloBench tends to be highly homogeneous and confined to a single scenario dictated by the original database. Compared to other benchmarks that use richly varied books as a base corpus for long-context tasks, HoloBench, built from individual databases within a single text-to-SQL dataset, lacks diversity and richness, offering limited scenarios.\\n\\n3. Limited challenge:\\nIn HoloBench, rows from relevant and irrelevant tables are merged as relevant and irrelevant contexts, respectively. However, data stored in different tables of a database typically exhibit noticeable differences, allowing LLMs to potentially distinguish between relevant and irrelevant contexts with ease. This suggests that the tasks provided by HoloBench may not be sufficiently challenging.\\n\\n4. Evaluation fairness and reliability:\\nHoloBench uses GPT-4-mini as its evaluation metric, which leads to potential uncontrollability and unfairness in evaluation. The human judgm\", \"questions\": \"Please answer the concerns in the above weaknesses section and the questions below:\\n\\n1. Selection of databases:\\nAccording to reference [1], Spider consists of 200 different databases, yet as stated in line 214 of the paper, HoloBench only employs five databases from it. What was the reason behind selecting these specific five databases? Would the findings apply consistently across other databases and other scenarios as well?\\n[1] Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task\\n\\n\\n2. Potential information leakage:\\nThe benchmark is constructed based on a publicly released text-to-SQL dataset from 2018, which might have already been included in the pre-training corpus of existing large language models. 
This could lead to potential information leakage, where LLMs might arrive at correct answers simply due to prior exposure to the data. Could the authors address this concern?\\n\\n3. Comparative analysis using long-context models:\\nIn Section 4.3, the authors utilize a document retriever that handles only 2k tokens while experimenting with data ranging from 4k to 64k tokens. This undermines the reliability of the performance comparison between LCLM and RAG. The authors should employ a RAG model that supports long-context processing to conduct this experiment accurately.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer KK9H,\\n\\nThank you again for reviewing our work! Please let us know if we have sufficiently addressed your questions and concerns. As the discussion deadline is approaching, we kindly request a prompt response and ask for a reconsideration of your evaluation score. Thank you for your support!\\n\\nWarm regards\"}", "{\"comment\": \"I appreciate the author's response to my reviews.\\nMy concerns have been addressed, I will increase the score.\"}", "{\"title\": \"Author Response (2/2)\", \"comment\": \"5. RAG comparison:\\nWe acknowledge that our RAG setup in the original paper could have been more comprehensive. To address this concern, we conducted additional extensive experiments that provide deeper insights into the relationship between retrieval size and model performance.\\n\\n## Additional Experiments with Varying Retrieval Sizes\\n\\nFirst, we investigated scenarios where half of the 64k context (i.e., 32k tokens) contains query-relevant information, examining performance trends across different retrieval sizes (2k, 4k, 8k, 16k, 32k). 
We used Llama-3.1-405b for these experiments.\\n\\nThe results demonstrate that performance peaks when the retrieval size (32k) matches the actual amount of relevant information in the context. This supports our paper's argument that reducing unnecessary information can enhance reasoning capabilities.\\n\\n### Table 1: Accuracy with information density = 0.5\\n\\n| # Retrieved Tokens | 2k | 4k | 8k | 16k | 32k | 64k (LCLMs with full context) |\\n|-------------------|------|------|------|-------|-------|---------------------------|\\n| RAG | 21.70 | 25.28 | 27.02 | 34.91 | 37.34 | 30.18 |\\n\\n## Impact of Information Density\\n\\nHowever, in real-world applications, the density of relevant information isn't known a priori. To investigate this, we conducted experiments with contexts where all 64k tokens are query-relevant. Note that due to different answer distributions, the absolute values between information density = 0.5 and 1.0 scenarios aren't directly comparable.\\n\\nThe results show that while using all 64k tokens achieved the best performance, the margin of improvement over 32k retrieval was surprisingly small. This suggests that current LCLMs still face limitations in processing ultra-long contexts, indicating significant room for improvement in their long-context reasoning capabilities.\\n\\n### Table 2: Accuracy with information density = 1.0\\n\\n| # Retrieved Tokens | 2k | 4k | 8k | 16k | 32k | 64k (LCLMs with full context) |\\n|-------------------|------|------|------|-------|-------|---------------------------|\\n| RAG | 14.85 | 19.18 | 21.79 | 29.36 | 38.69 | 39.06 |\\n\\n## Retrieval Quality Analysis\\n\\nWe further investigated whether RAG would be optimal if we knew the distribution of relevant information beforehand. 
In scenarios with 32k relevant information within 64k context, our analysis using bge-large-en-v1.5 (a state-of-the-art retrieval system) achieved:\\n\\n- **Precision**: 83.92%\\n- **Recall**: 80.89%\\n\\nThese results, while strong, highlight that even in ideal settings, retrieval errors remain significant.\\n\\n## Key findings:\\n\\n1. While RAG can outperform LCLMs when the optimal retrieval size is known (matching the amount of relevant information), this information is rarely available in practical settings\\n2. Even with state-of-the-art retrieval models (bge-large-en-v1.5), substantial retrieval errors persist (Precision: 83.92%, Recall: 80.89%), highlighting the inherent challenges in information selection\\n3. The practical deployment of long-context reasoning systems requires either:\\n - Development of more adaptive and robust retrieval mechanisms that can dynamically adjust to varying information densities, or\\n - Enhancement of LCLMs' capabilities to process and reason over ultra-long contexts more effectively\\n\\nThese findings suggest that the challenges in long-context reasoning cannot be solved by simply increasing retrieval size or context length, but rather require fundamental improvements in both retrieval quality and model capabilities.\\n\\nThank you for your valuable feedback!\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your positive feedback about our benchmark's potential value to the community and the reproducibility of our work.\\n\\n> How to differentiate b/w long but simple questions and genuinely complex ones\\n\\nSince we manually create questions based on SQL queries, the more complex the SQL query, the more conditions it contains, leading to longer questions. So, longer questions generally tend to be more complex questions.\\n\\nTo quantify this, we calculated the average number of words per question for each level of SQL query difficulty. 
The results are as follows:\\n\\n| SQL Difficulty | Average Question Length (words) |\\n|:--------------:|--------------------------------:|\\n| Easy | 10.8 |\\n| Medium | 13.8 |\\n| Hard | 17.0 |\\n\\nThis confirms a trend within our dataset: questions derived from more complex queries tend to be longer.\\n\\nIf we have misunderstood your point, please let us know.\"}
Overall, I believe this paper exceeds the acceptance threshold.\", \"additional_comments_on_reviewer_discussion\": \"The authors effectively addressed concerns about the significance of the work, as well as the scale, difficulty, and diversity of the benchmark. They also presented additional experiments to compare long-context LMs with RAG approaches. Some reviewers responded positively and raised their scores after reviewing these updates.\"}", "{\"title\": \"Author Response (1/2)\", \"comment\": \"Thank you for your detailed review. We address each of your concerns:\\n\\n1. Benchmark Size:\\nAlthough HoloBench contains only 90 questions, each question is evaluated across numerous configurations, such as context length, information position, and information amount. For instance, testing 5 context lengths, 5 information positions, and 2 information amounts results in 4,500 test cases (90 \\u00d7 5 \\u00d7 5 \\u00d7 2) per model\\u2014a scale unmatched by existing benchmarks mentioned. Moreover, we meticulously crafted our questions to encompass diverse query types and difficulty levels, with each configuration validated for correctness via SQL execution. This comprehensive coverage significantly amplifies the effective size of our evaluation, providing robust insights into model behavior across varying conditions, far beyond what the raw question count implies. \\n\\n\\n2 and 3. Diversity of databases and difficulty of benchmark:\\nTo create HoloBench for evaluating LCLMs, we required large tables capable of generating long contexts. 
Considering this, we applied the following criteria to ensure diversity in both domains and answers: \\nDatabases must have more than two tables and at least 500 rows in total.\\nEach selected database must belong to a distinct domain.\\nAmong the databases in the SPIDER dataset, only 15 meet the row count requirement: ['soccer_1', 'wta_1', 'baseball_1', 'sakila_1', 'flight_4', 'formula_1', 'college_2', 'bike_1', 'store_1', 'chinook_1', 'world_1', 'csu_1', 'flight_2', 'wine_1', 'car_1']. From these, we selected five databases spanning different domains while also diversifying the number of tables and rows. The selected databases are summarized below:\\n\\n| Base DB | # Tables | # Total Rows |\\n|------------|----------:|--------------:|\\n| Wine_1 | 3 | 555 |\\n| Store_1 | 4 | 2,548 |\\n| College_2 | 9 | 3,554 |\\n| Flight_4 | 3 | 20,989 |\\n| Soccer_1 | 5 | 185,608 |\\n\\nThis selection balances domain variety, table count, and row diversity to create a robust evaluation benchmark.\\nNote that we did consider alternatives such as using external corpora (e.g., Wikipedia) to increase diversity and complexity in the context. However, ensuring control over the relevance of external information proved challenging, making it difficult to guarantee the correctness of the ground truth. To maintain a systematic and automated evaluation process, we chose to limit the context to the content within the databases. \\nAlthough one might assume that LCLMs could easily distinguish between relevant and irrelevant contexts when they are drawn from different tables within a database, our results show that the models still face difficulties in making these distinctions.\\n\\n3. Evaluation fairness and reliability:\\nWhile we rely on GPT-4o-mini for automatic evaluation, we find that its evaluations align closely with human judgements (93.8% agreement).\\n\\n4. 
Potential information leakage due to publicly released text-to-SQL dataset:\\nWhile this is a valid concern, our benchmark is constructed by dynamically sampling table rows and generating contexts with relevant and irrelevant information. We derive the golden answers for different settings by executing SQL queries on the subtables. This makes it highly unlikely that LCLMs would have any parametric knowledge about the golden answers.\"}", "{\"comment\": \"Dear Reviewer Epz7,\\n\\nThank you again for reviewing our work! Please let us know if we have sufficiently addressed your questions and concerns. As the discussion deadline is approaching, we kindly request a prompt response and ask for a reconsideration of your evaluation score. Thank you for your support!\\n\\nWarm regards\"}", "{\"summary\": \"This paper argues that existing RAG systems struggle with complex tasks that require aggregation and reasoning over information spanning across multiple documents, while Long-Context language models may have the potential to this task but there is no study clearly show that. This paper proposed a new framework called HoloBench, which used database operations to generate context to test whether LCLMs can answer the target type of questions.\\nThe experiments indicate that the amount of information in the context has a bigger influence on LCLM performance than the actual context length as well as the complexity of queries.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It is great to see a set of design principles in a paper that is to propose a benchmarking framework.\\n\\n2. The paper did a comprehensive study of experiments and found interesting observations about factors, such as information amount, context length, information positions and query types and complexity to query performance.\", \"weaknesses\": \"1. The reality and the quality of the corpus/context is unclear. 
The authors decided to generate the context by verbalizing data rows, which may lead to a set of unrealistic documents/passages with a simple description of attribute values in each row. Furthermore, there is no study in this paper that can show that the verbalization process is faithful to the original tables/rows.\\n\\n2. The discussion about RAG and LCLMs is flawed. It does not make sense at all to limit RAG to retrieving only 2k tokens and then compare it with a language model that can be fed 4k to 16k of context.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for recognizing our comprehensive experimental analysis and clear design principles. We address your main concerns:\\n\\n1. Context quality:\\nUnlike model-based systems, we **ensure faithful and natural text generation** through manually crafted templates for each table (examples in Table 4). This template-based approach **guarantees complete preservation of the original tabular data** while maintaining fluent text, preventing hallucination or information loss that could occur with neural generation methods.\\n\\n2. RAG Comparison:\\nWe acknowledge that our RAG setup in the original paper could have been more comprehensive. To address this concern, we conducted additional extensive experiments that provide deeper insights into the relationship between retrieval size and model performance.\\n\\n## Additional Experiments with Varying Retrieval Sizes\\n\\nFirst, we investigated scenarios where half of the 64k context (i.e., 32k tokens) contains query-relevant information, examining performance trends across different retrieval sizes (2k, 4k, 8k, 16k, 32k). 
We used Llama-3.1-405b for these experiments.\\n\\nThe results demonstrate that performance peaks when the retrieval size (32k) matches the actual amount of relevant information in the context. This supports our paper's argument that reducing unnecessary information can enhance reasoning capabilities.\\n\\n### Table 1: Accuracy with information density = 0.5\\n\\n| # Retrieved Tokens | 2k | 4k | 8k | 16k | 32k | 64k (LCLMs with full context) |\\n|-------------------|------|------|------|-------|-------|---------------------------|\\n| RAG | 21.70 | 25.28 | 27.02 | 34.91 | 37.34 | 30.18 |\\n\\n## Impact of Information Density\\n\\nHowever, in real-world applications, the density of relevant information isn't known a priori. To investigate this, we conducted experiments with contexts where all 64k tokens are query-relevant. Note that due to different answer distributions, the absolute values between information density = 0.5 and 1.0 scenarios aren't directly comparable.\\n\\nThe results show that while using all 64k tokens achieved the best performance, the margin of improvement over 32k retrieval was surprisingly small. This suggests that current LCLMs still face limitations in processing ultra-long contexts, indicating significant room for improvement in their long-context reasoning capabilities.\\n\\n### Table 2: Accuracy with information density = 1.0\\n\\n| # Retrieved Tokens | 2k | 4k | 8k | 16k | 32k | 64k (LCLMs with full context) |\\n|-------------------|------|------|------|-------|-------|---------------------------|\\n| RAG | 14.85 | 19.18 | 21.79 | 29.36 | 38.69 | 39.06 |\\n\\n## Retrieval Quality Analysis\\n\\nWe further investigated whether RAG would be optimal if we knew the distribution of relevant information beforehand. 
In scenarios with 32k relevant information within 64k context, our analysis using bge-large-en-v1.5 (a state-of-the-art retrieval system) achieved:\\n\\n- **Precision**: 83.92%\\n- **Recall**: 80.89%\\n\\nThese results, while strong, highlight that even in ideal settings, retrieval errors remain significant.\\n\\n## Key findings:\\n\\n1. While RAG can outperform LCLMs when the optimal retrieval size is known (matching the amount of relevant information), this information is rarely available in practical settings\\n2. Even with state-of-the-art retrieval models (bge-large-en-v1.5), substantial retrieval errors persist (Precision: 83.92%, Recall: 80.89%), highlighting the inherent challenges in information selection\\n3. The practical deployment of long-context reasoning systems requires either:\\n - Development of more adaptive and robust retrieval mechanisms that can dynamically adjust to varying information densities, or\\n - Enhancement of LCLMs' capabilities to process and reason over ultra-long contexts more effectively\\n\\nThese findings suggest that the challenges in long-context reasoning cannot be solved by simply increasing retrieval size or context length, but rather require fundamental improvements in both retrieval quality and model capabilities.\\n\\nThank you for your valuable feedback!\"}", "{\"summary\": \"This paper presents HoloBench, a benchmark designed to evaluate the holistic reasoning capabilities of Long-Context Language Models (LCLMs). While Retrieval-Augmented Generators (RAGs) effectively access information from large document collections, they struggle with complex tasks that require aggregation and reasoning across multiple documents. The authors refer to this as holistic reasoning, for which long-context language models have been developed. 
HoloBench aims to evaluate LCLMs against RAGs under various conditions.\\n\\nAlthough similar datasets have been created in the past, this dataset is more comprehensive in terms of context length, information density, information distribution, and query complexity, and it includes automated evaluation methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The code has been released, and the data is reproducible. I believe the authors are providing a valuable service to the community.\", \"weaknesses\": \"From a technical standpoint, this method is relatively straightforward.\", \"questions\": \"How do you differentiate between long but simple questions and genuinely complex ones?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a new comprehensive benchmark to test the reasoning capabilities of LCLM across multiple dimensions. 
The authors unveil a range of novel insights from this benchmark, by comparing the performance of recent LCLMs in diverse scenarios (position of the information, density of information, complexity of the query).\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The authors:\", \"Proposed a new, original, comprehensive and timely benchmark dataset and framework based on text-to-sql for consistent long context LM benchmarking\", \"Important benchmark dataset and framework, addressing aspects of LCLMs seemingly not thoroughly explored yet\", \"Identified a range of novel and important insights about LCLM reasoning (lost in the middle comparison, model size, RAG comparison, query complexity effect, CoT importance)\", \"Tested recent LCLMs (Llama 3.1, GPT4o, Claude 3.5, Gemini 1.5)\", \"Used LLM-based evaluation metrics, which showed 93.8% agreement with human judgements\", \"Good paper layout to quickly identify important insights derived from the benchmark\", \"Good discussion points on RAG contribution\"], \"weaknesses\": [\"Minor: \\\"Information density\\\" concept could be further detailed\"], \"questions\": \"/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
5Ky0W6sp8W
The Buffer Mechanism for Multi-Step Information Reasoning in Language Models
[ "Zhiwei Wang", "Yunji Wang", "Zhongwang Zhang", "Zhangchen Zhou", "Hui Jin", "Tianyang Hu", "Jiacheng Sun", "Zhenguo Li", "Yaoyu Zhang", "Zhi-Qin John Xu" ]
Large language models have consistently struggled with complex reasoning tasks, such as mathematical problem-solving. Investigating the internal reasoning mechanisms of these models can help us design better model architectures and training strategies, ultimately enhancing their reasoning capability. In this study, we constructed a symbolic dataset to investigate the mechanisms by which Transformer models employ vertical thinking strategy based on their inherent structure and horizontal thinking strategy based on Chain of Thought to achieve multi-step reasoning. We introduced the concept of buffer mechanism: the model stores various information in distinct buffers and selectively extracts them through the query-key matrix. We proposed a random matrix-based algorithm to enhance the model's reasoning ability, resulting in a 75\% reduction in the training time required for the GPT-2 model to achieve generalization capability on the PrOntoQA dataset. These findings provide new insights into understanding the mechanisms of large language models.
[ "Large language model", "buffer mechanism", "thinking strategies", "multi-step reasoning" ]
Reject
https://openreview.net/pdf?id=5Ky0W6sp8W
https://openreview.net/forum?id=5Ky0W6sp8W
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ziMpwObdXf", "xA8fb1Bpp2", "wfpmarqnhm", "uBNSBypjcg", "r9DaUPH10C", "qXkLe8wWSb", "gkhPUSEzG7", "fMBH4uRusr", "fKFKF7Ga6v", "eU4GBVKI0J", "eEfucyBnRn", "dhLLPiyDba", "cvXNb1CK7F", "aKaFvJPQyt", "ZqywhMq2fs", "Zm0pFgJTmD", "Yqm5Z7ZJhH", "TjH7VY1ytm", "TM4ipZ8YQq", "PrlgWuGwV2", "O4jg0wHdTL", "M5mHPYENby", "HBESCGplXe", "FYXA1a6wgs", "E7NXwN803m", "DbkcNihr2J", "BUIlDsjyoD", "7uNN74fQdD", "7N3xXvZoRP", "72wAlv7AMf", "3kSFHZxUHR", "31RXhfHhD1", "2Ad5CRhzMm", "26KJjHNyMT", "1xYtiqMaq2", "1q0YdOtmyD" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731951746559, 1730604369550, 1732021967868, 1730470566065, 1732457208523, 1730567511642, 1732438411640, 1733210586634, 1731954447262, 1731951877438, 1732325143537, 1733117757908, 1732457326739, 1732636204024, 1732457727110, 1737523779235, 1731953942620, 1732586836738, 1732706308239, 1731949559743, 1733312172580, 1732624360744, 1731954689029, 1731982179066, 1730232269574, 1732706640525, 1733170747505, 1733117401783, 1733198591233, 1732292769627, 1732383365061, 1734919209904, 1731953766335, 1732457799797, 1732900717730, 1732975256155 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_fnjG" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6602/Reviewer_r2jr" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_JU6V" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_JU6V" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_fnjG" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_JU6V" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_fnjG" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_mhu8" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_mhu8" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_mhu8" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_fnjG" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_r2jr" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Area_Chair_zSuH" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ], [ "ICLR.cc/2025/Conference/Submission6602/Reviewer_r2jr" ], [ "ICLR.cc/2025/Conference/Submission6602/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your constructive comments on our paper. 
Below are our detailed responses to each of your points:\\n\\n**[R1W1.1] [Synthetic Tasks]**\\n\\n1. **[Why start from synthetic tasks?]** Due to the complexity of linguistic data and large language models, it is challenging to identify key mechanisms through direct research on real language data, limiting us to relatively macro-level statistical analyses. We chose to conduct research on simplified symbolic datasets, which allows us to perform sufficient numerical experiments with minimal computational resources and to closely observe the internal workings of the Transformer model. The buffer mechanism we proposed aids in understanding the expressive capabilities of Transformer models.\\n\\n2. **Applicability to real large language models.** In the revised manuscript, we have added experiments calculating the matching score and kernel score in the Phi-3 model. We provided methods for computing the matching score and kernel score in multi-head models. By calculating the kernel score, we assessed the strength of inter-layer information interactions in Phi-3 and computed the matching score for each head in each layer. We found:\\n\\n - **(a)** When $l_1 \\\\leq l_2$, $W^{qk(l_1)} W^{vo(l_2),T}$ exhibits almost no significant features (kernel score \\u2248 0).\\n - **(b)** When $l_1 > l_2$, the kernel score of $W^{qk(l_1)} W^{vo(l_2),T}$ decreases as $l_1 - l_2$ increases. These observations indicate that information stored in earlier layers (by buffer $W^{vo(l_2)}$) is extracted by later layers (via $W^{qk(l_1)}$), and the model tends to extract the most recently processed information.\\n - **(c)** Heads with high matching scores are mostly concentrated in the middle layers, which aligns with previous findings that reasoning in large models predominantly occurs in the middle layers.\\n\\n3. **Algorithmic insights from synthetic tasks.** The buffer mechanism we discovered based on synthetic tasks can inspire algorithm design. 
For example, as shown in Figure 8, the RMBA algorithm proposed\u2014motivated by the buffer mechanism\u2014can also aid in training real large language models.\\n\\n**[R1W1.2] [Prior Work]**\\n\\nAccording to our literature review, most current work on multi-hop reasoning either evaluates existing large models or uses causal intervention, working backward from results, to identify key pathways. In the *Related Work* section of the revised manuscript, we have added references to several recent articles related to multi-hop reasoning and causal intervention and discussed the novelty of our work. If the reviewer could provide more relevant references, we can more specifically discuss the connections and differences between our work and these studies.\\n\\n**[R1W2] [Causal Intervention]**\\n\\n1. **Complementarity with our approach.** First, there is no contradiction between these two studies, as causal intervention is one method to find key information transmission paths, while the buffer mechanism attempts to explain how this information transmission occurs. Moreover, causal intervention can only identify key paths but cannot fundamentally understand how these paths are generated, offering limited guidance for algorithm design. Our paper aims to study what kinds of model weights cause these paths to emerge, providing a more detailed modeling of the implementation mechanisms of causal paths. Accordingly, the proposed RMBA can also promote the generation of reasoning logic circuits.\\n\\n2. **Progress through synthetic tasks and aligned weight matrices.** Some existing works have made significant progress via \\\"synthetic tasks\\\" and \\\"weight alignment.\\\" For example, Boix-Adsera E, et al.\u00a0[1] proposed the IMBA by studying single-step reasoning, which has been proven to aid in learning simple reasoning data. 
However, from the buffer perspective, we found that this algorithm causes the buffer to degenerate, making it difficult to learn multi-step reasoning. Therefore, we propose the RMBA to further improve the model's accuracy in multi-step reasoning tasks.\\n\\n [1] Boix-Adsera E, et al. *When can transformers reason with abstract symbols*\\n\\n3. In the field of biological neural networks, performing causal intervention is common because the methods for recognizing and processing biological signals are limited. However, in artificial neural networks, we have richer means to recognize and process signals, allowing us to obtain deeper insights than those in biological neural network research.\"}", "{\"summary\": \"The paper studies the 'buffer mechanism' in transformers for solving a in-context multi-hop reasoning task. The task requires chaining associations present in the context (e.g. \\\"a->b, b->c\\\") to perform multi-hop reasoning (e.g. \\\"a->c\\\"). Inspired by the copying + induction head mechanisms found in prior work, the authors hypothesize that transformers trained to perform this task sequentially look up the necessary reasoning steps and form intermediate answers at the last token position, so that the second layer looks up the first hop, the third layer looks up the second hop, etc. By default the intermediate representations persist in the residual stream at the last token; to prevent the intermediate representations for one hop from interfering with those for a later hop, the authors hypothesize that the intermediate representations are kept in separate 'buffers', i.e. linearly isomorphic subspaces that are disjoint from each other.\\n\\nThe authors empirically test the hypothesis by measuring the alignment of certain weight matrices in a 3 layer transformer trained to do 2-hop reasoning. 
Further, the authors proposed a parametrization of the attention weight matrices that biases the network towards forming these disjoint subspaces that reduce interference. They find that this parametrization reduces the number of training steps needed for a 12-layer transformer to learn the ProntoQA task.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Multi-hop reasoning in language models is an important problem to study\", \"The figures are beautifully made\", \"The observation that multihop reasoning can generalize to unseen tokens because the embeddings at initialization already have the required properties is neat\"], \"weaknesses\": [\"The paper centers its analyses on small transformers trained on synthetic tasks. It is not clear if any finding can be transferred to practical language models trained on text. Prior work has already studied two-hop reasoning in practical language models in more realistic tasks --- a more convincing demonstration of the buffer mechanism is to identify it in these settings.\", \"Overall, the empirical analyses are weak and can benefit from more careful experimental techniques. In particular:\", \"In sec 3.1, the authors argue that a particular transformer that they trained exhibits the buffer mechanism by comparing metrics about how aligned certain weight matrices are. This is much weaker and more circumstantial than directly performing causal interventions on the activations to verify their claims.\", \"Further, the authors did not empirically compare the buffer mechanism with any competing mechanisms. Off the top of my head, I imagine one competing explanation would be that instead of storing intermediate representations in distinct buffers, maybe the transformer instead learns to overwrite stale representations with new representations. 
Such a mechanism could conceivably still possess similar weight alignment patterns as the buffer mechanism. \", \"Exacerbating this, the authors only performed this analysis on a 3-layer transformer, whereas their diagram (Fig. 3) and their theoretical construction would work for arbitrarily deep models. It is conceivable that interesting divergences between the buffer mechanism and the overwriting mechanism may only appear in deep enough models.\"], \"questions\": [\"I can understand the idea behind the matching score, but can you provide motivation for the kernel score? Why is there a factor of $n$ in the numerator? My understanding is that Ker should be a $d_m \\\\times d_m$ matrix, so it's mystifying to me why $n$ is there. It's also really hard to interpret this quantity. In comparison, I think your matching score makes a lot more sense, because the softmax constrains it to [0,1], and so I know how to interpret values. My personal guess is that the operator norm of the difference between the kernel and identity would be a good metric, but I'd be curious to hear the reasoning behind your choice.\", \"Do you perform hyperparameter sweeps for the results in sec. 5? If not, the finding that RMBA speeds up training could simply be because the parametrization somehow matches the default hyperparameters you use, and not because of any real gains. I believe at the very least you should sweep learning rate for each of the 3 methods and record the best result.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to reiterate our sincere gratitude for the recommendations you provided. Moreover, we genuinely appreciate your willingness to raise the score for our work. 
Should you have any additional questions or comments, please do not hesitate to share them with us.\"}", "{\"summary\": \"This paper proposes a mechanistic analysis of multi-step reasoning in language models. In particular, the authors design a synthetic task that requires the model to construct an n-step deductive reasoning chain. In this setting, the authors show that a decoder-only single-head transformer implements what they term a \\u201cbuffer mechanism\\u201d: the attention weight matrices W_q and W_k act as information-extractors that retrieve information from a set of buffers implemented by the W_v and W_o matrices. This framework is first formally explained, then empirically verified. The authors then connect this result to the horizontal computation involved in CoT reasoning. Additionally, the paper proposes an approach to modify the attention weight matrices in such a way as to create an additional buffer, and presents evidence showing its effectiveness both on the synthetic task designed by the authors and on ProntoQA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The problem addressed is interesting and relevant.\", \"Solid experimental evidence supports the hypothesized mechanism.\", \"The authors use insights on the model's internal mechanics to propose a method for improving training, which is shown to be effective.\"], \"weaknesses\": \"The link to CoT reasoning is somewhat tenuous. Although the authors demonstrate how a 2-layer transformer could theoretically implement a multi-step reasoning algorithm for the synthetic task, this does not imply that a transformer-based language model uses the same approach to perform CoT reasoning. The statement, \\u201cin Fig. 
6 we illustrate how the Transformer model employs CoT for multi-step reasoning\\u201d (lines 341-342), feels overly assertive and lacks supporting empirical evidence.\", \"questions\": \"Can the authors provide empirical evidence about the hypothesized CoT mechanism?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the reviewers for carefully reviewing our numerous revisions and providing further valuable suggestions. Below, we address your comments one by one. Corresponding second-round modifications are highlighted in the revised manuscript using blue text.\\n\\n[Hyperparameter Sweep]\\nWe assume the reviewer is referring to experiments conducted under the configuration of $d_m=256, d_k=128$. Indeed, as shown in the figure, both the baseline and the IMBA method are highly sensitive to random seeds (primarily affecting the initialization of weights). Among three trials with different random seeds, only one accuracy curve aligns with the accuracy curve corresponding to RMBA.\\n\\nFrom a rigor perspective, we agree that the statement in the manuscript, \\n>\\\"The findings indicate that under these configurations, the RMBA consistently improves the accuracy on this reasoning task,\\\"\", \"should_be_revised_to\": \">\\\"The findings indicate that RMBA parameterization can more robustly reach a high accuracy by a certain number of training steps across a wider range of hyperparameters.\\\"\\n\\n[Competing Mechanisms]\\n\\n1. Overwrite Mechanism\\n\\nTo ensure clarity and avoid ambiguity, we enumerate two overwrite mechanisms we can conceive of:\\n\\n(1) For the same token, the information [a] stored in the buffer $W^{vo(l_0)}$ is replaced by new information [b] generated in layer $l_0+1$ (or subsequent layers).\\n\\n(2) For two different tokens (denoted as token 1 and token 2), the buffer $W^{vo(l)}$ stores information [a] and [b] respectively. 
Similar to CoT strategies, the model uses information [a] from token 1 during the first reasoning step and information [b] from token 2 during the second reasoning step. Both reasoning steps utilize information from the same buffer $W^{vo(l)}$.\\n\\nFor overwrite mechanism (1), theoretically, this situation is impossible. Each layer has a new $W^{vo(l)}$, and due to the inherent properties of the Transformer architecture, new information [b] from layer $l_0+1$ will be passed in the form of [b]$W^{vo(l_0+1)}$, not [b]$W^{vo(l_0)}$. However, in practice, special cases may arise where $W^{vo(l_0)}=W^{vo(l_0+1)}$. A simple validation method is to compute the cosine similarities between the $W^{vo(l)}$ matrices. In the revised manuscript, we include heatmaps of cosine similarities between $W^{vo(0)}, W^{vo(1)}, W^{vo(2)}, I$ in Appendix D.3 to confirm that they represent four distinct buffers rather than a single one.\\n\\nFor overwrite mechanism (2), we have added Appendix J to present the causal intervention experiments. In contrast to previous causal intervention methods that replace all information within a token, we narrowed the scope to focus on a specific buffer within the token. First, we identified critical tokens and logical circuits by masking the information transfer paths. Then, we selectively replaced information in each buffer of the critical token to observe the model\u2019s output. Results show that in layer $l$, information in the buffer $W^{vo(l-1)}$ of the final token is crucial, whereas altering information in the final token\u2019s other buffers does not affect the model's output. Coupled with the near-zero cosine similarity between $W^{vo(0)}, W^{vo(1)}, W^{vo(2)}, I$ mentioned above, we can assert that the model utilizes distinct buffers rather than overwriting the same buffer.\\n\\n2. Induction Heads\\n\\nThe buffer mechanism proposed in this work can be regarded as an extension and supplement to the induction head mechanism. 
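(Editorial aside: the buffer-distinctness check described above, near-zero cosine similarity between the $W^{vo(l)}$ matrices and $I$, can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code; the helper name and the choice to flatten each matrix into a vector before comparing are our assumptions.)

```python
import numpy as np

def flat_cosine(a, b):
    """Cosine similarity between two weight matrices, flattened to vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for W^{vo(0)}, W^{vo(1)}, W^{vo(2)} and the identity buffer I.
# Independent random matrices have near-zero pairwise similarity, which is
# the signature of four distinct buffers rather than one shared buffer.
rng = np.random.default_rng(0)
buffers = [rng.standard_normal((64, 64)) for _ in range(3)] + [np.eye(64)]
heatmap = np.array([[flat_cosine(x, y) for y in buffers] for x in buffers])
```

A heatmap of `heatmap` with values near 1 only on the diagonal would match the pattern the authors report in Appendix D.3.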
From a functional perspective, \"same token matching\" mentioned in this work is similar to induction heads. However, a key objective of this work is to demonstrate that **multi-step reasoning tasks are not equivalent to performing independent single-step reasoning multiple times.** This distinction highlights the research value of studying multi-step reasoning. To this end, we provide the following evidence (focusing on symbolic multi-step reasoning tasks):\\n\\n(1) From the perspective of mechanism analysis, based on Figures 3 and 4 and the newly added causal intervention experiments, the model always stores new reasoning results in a new buffer and then performs same-token matching. The multi-step reasoning mechanism can be seen as a generalization of induction heads. Unlike \"independent induction head usage,\" we require buffer differentiation across layers, which has not been discussed in prior work.\"}", "{\"summary\": \"The paper proposes a \"Buffer Mechanism\" that could be used by Transformers to perform reasoning tasks. For a specific synthetic reasoning task, the authors give a concrete mathematical description of how the proposed Buffer Mechanism might implement vertical-style reasoning (through the layers of a Transformer), and horizontal-style reasoning (like Chain-of-Thought). The authors also suggest a tweak (arising from their Buffer Mechanism formulation) by which the performance of Transformers might be enhanced.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The proposed Buffer Mechanism makes sense, though its description seems a little obfuscated. The paper sets an ambitious and laudable goal : Understanding how Transformers can implement the learning of reasoning. The experiments included demonstrate that, when measured using benchmarks chosen by the authors, the method suggested is effective. 
The performance-enhancing tweak suggested makes sense in the context of making reasoning mechanisms more learnable overall - certainly an interesting direction.\", \"weaknesses\": [\"The paper makes the bold assumption that the \\\"Buffer Mechanism\\\" that is described is the specific mechanism that Transformers are using to perform reasoning. Before being proven, this seems like it should be treated more like a hypothesis. And the experiments given do not prove anything in general, rather they point towards this mechanism being plausible for their particular set-up. For example:\", \"L079 : \\\"Specifically, we found that Transformers utilize a Buffer Mechanism when engaging in complex reasoning\\\". As far as this reviewer can ascertain, this was not proven, even experimentally.\", \"L091 : \\\"We discover the buffer mechanism employed by language models\\\" - again this is a very broad claim, given the evidence shown.\", \"etc...\", \"So perhaps, it would be better to restate quite a few of the claims made...\", \"L501 : \\\"we investigated the buffer mechanism employed by Transformer models\\\" -> \\\"we investigated how Transformers might implement a buffer mechanism\\\"\"], \"in_the_section_around_l248\": \"\\\"This indicates that the model has learned the underlying patterns in the data.\\\" This seems like a statement about a property of model training, whereas the result indicated by Eqn 11 depends on L238 (\\\"When same-token matching happens\\\") - which appears to be circular logic. (i.e. 
given that it has learned X, then it has learned X')\", \"minor_things\": [\"Figure 1 : Testing each LLM on the same question 9 times, and listing out the results (rather than, for instance, showing a histogram) really emphasised how little the performance of these LLMs was investigated\", \"Figure 2 : \\\"Serialized Reprentation\\\" -> \\\"Serialized Representation\\\"\"], \"questions\": \"Would it be reasonable to have a mental model of the Buffer Mechanism as being the result of there being a basis (~L209) in which the layer-wise transformations/matching for the Transformer internal representations are approximately block-diagonal (these sub-representations being the buffers)?\", \"figure_5a\": \"Why does the test accuracy go down before epoch 50? And then leap up? What happens at epoch 50?\\n\\nHow are the embeddings of the tokens learned (in particular the OOD tokens)? \\n\\nDoesn't Appendix D indicate that \\\"The Buffer Mechanism\\\" presented in the body of the paper is likely only a partial explanation of what is actually being performed by Transformers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"[Re: R2W1] [Overclaim]\\n\\nPerhaps it wasn't clear (and another reviewer picked up on the same thing) : The paper is proposing *A* Buffer Mechanism, and appears to find that such a mechanism can be detected when it is looked for. But the paper doesn't show that transformers are solely using *THE* Buffer Mechanism that you are proposing. There could be many mechanisms, but they were not looked for here. Certainly, if you looked for a Buffer Mechanism and didn't find any evidence, then that proposal wouldn't even be in a paper. 
Here, you did find what you were looking for, but that is not proof that you have discovered THE Buffer Mechanism used by Transformers, nor that other mechanisms aren't being used as well.\\n\\nYour update (restricting the claim to \\\"symbolic multi-step reasoning task\\\") doesn't tackle the heart of the issue (which is present in the title of the paper, so it may be difficult to change course at this point).\\n\\n\\n[Re: R2W3] [Minor Things]\\n\\nGood update, which supports the idea that large models require CoT much more clearly.\\n\\n\\n[Re: Diagonal Structure in Real Models]\\n\\nGood to see that the same Buffer Mechanism can be seen in a real model (Phi-3).\\n\\nBut again, finding something that you were looking for is only evidence that your claims might be true (your claim in the Introduction is \\\"We discover the buffer mechanism employed by language models during the reasoning process in symbolic multi-step reasoning tasks\\\"). It seems like \\\"We found evidence that supports a buffer mechanism (which we describe herein) being employed by language models during the reasoning process in symbolic multi-step reasoning tasks\\\" would be better supported.\\n\\n\\n[Re: R2Q2] [Accuracy Suddenly Improved]\\n\\nThis reviewer finds it odd that epoch=50 is such a turning point. In (for instance) grokking behaviours, the turning points are spread over an order of magnitude (or more). So the sudden take-off is remarkable.\\n\\n\\n[Re: R2Q3] [OOD Embedding]\\n\\nNow this part seems like a much stronger argument in favour of a buffer mechanism being a prime-candidate as an explanation (because there are dramatically fewer mechanisms that could make use of untrained embeddings, making your Buffer Mechanism a stronger candidate). It should definitely be made clearer that the OOD embeddings are not trained (i.e. 
left at random initialisation).\\n\\n\\n[Re: R2Q4] [Parallel Reasoning]\\n\\nAgain, not being in conflict with the main text is not the same as being strong evidence for the theory.\\n\\n\\n[Overall:]\", \"my_ratings_are_unchanged\": \"I'm reluctant to increase (despite the obvious hard work the authors have performed), since the main claim is a short distance from a presentation that starts \\\"We discovered how transformers do reasoning!\\\"\"}", "{\"comment\": \"Dear Reviewer,\\n\\nFirst and foremost, we would like to express our sincere gratitude for the substantial and constructive feedback you have provided on our work. Your insightful suggestions have greatly contributed to the overall improvement of our manuscript. We highly appreciate your decision to revise your initial evaluation, and we respect your final assessment of our paper.\\n\\nMany of your suggestions have been particularly thought-provoking. For instance, the mention of causal intervention led us to consider the potential development of a buffer-level causal intervention method (similar to Appendix J) in future work. This approach could allow for better identification of key information and further support the existence of a buffer mechanism within large models. While we acknowledge your concern that the three pieces of evidence we provided may not be directly conclusive, we still believe that there is no need for undue skepticism regarding the presence of the buffer mechanism in large language models. At the very least, our work demonstrates that the Transformer architecture can leverage buffer mechanisms to process diverse information, showcasing its architectural advantages. In future research, we will explore how to identify more concrete examples for the direct validation of the buffer mechanism\u2019s existence and aim to develop more refined causal intervention methods.\\n\\nOnce again, we sincerely appreciate your positive and constructive feedback throughout this process. 
We wish you all the best in your future endeavors.\"}", "{\"comment\": \"Thank you for your constructive comments on our paper. Below are our responses to the issues you raised:\\n\\n**[R3W1] [Overclaim]** We thank you and the other reviewers for highlighting the overclaim issues. In the revised manuscript, we have addressed these concerns by adding the qualifier *\\\"symbolic multi-step reasoning tasks\\\"* to the relevant statements.\\n\\n**[R3Q1] [Empirical Evidence]**\\n\\n1. In the revised manuscript, we add experiments in the appendix G on how to train a 2-layer model to achieve symbolic multi-step reasoning tasks by performing CoT. In this experiment, we trained a Transformer using only single-step reasoning data. During the testing phase, we allowed the model to use CoT for reasoning by re-inputting the output information from each step back into the model. This enabled us to achieve 2-step and 3-step reasoning. In the appendix, we present the actual information flow diagrams rather than the schematic diagrams shown in the main text. We hope this experiment can supports our viewpoint.\\n\\n2. The core idea of this section is that, unlike vertical thinking where the model needs to form multiple buffers, when using CoT, the model repeatedly overwrites the same buffer to achieve multi-step reasoning. Therefore, during the training phase, only a few weight matrices are needed to form the reasoning circuit, which significantly reduces the training difficulty of the model. We have added these points to the main text.\\n\\n**[Other Revisions]** In addition, we have incorporated suggestions from other reviewers, adding visualizations of the matching scores in the Phi-3 model and conducting a detailed parameter sweep for the RMBA experiments. We have also enriched the explanations of some figures and formulas in the paper, and rewritten content that might cause ambiguity, enhancing the rigor of the manuscript. 
The specific changes can be found in the common response. Your comments, along with those of other reviewers, have greatly helped improve the quality of our paper.\\n\\n**We look forward to your reply!**\"}", "{\"comment\": \"**[R1W3] [Competing Mechanisms]**\\n\\n1. **Existence of the overwrite buffer mechanism.** The overwrite buffer mechanism you mentioned does exist. In Section 4 of our paper, the Chain-of-Thought (CoT) method conducts multi-step reasoning by overwriting the same buffer, which explains why CoT can enhance model reasoning (because it only requires aligning 2 pairs of weight matrices).\\n\\n2. **Non-existence in vertical thinking frameworks.** However, for the vertical thinking framework, the overwrite buffer mechanism does not exist because the weight matrices (buffers) $W^{vo(l)}$ of each layer are different.\\n\\n**[R1W4] [3-Layer vs. Arbitrarily Deep Models]**\\n\\nThe buffer mechanism was inspired by our observations in 3-layer models. In fact, we have also conducted numerical experiments on deeper models (e.g., 4-layer models performing 3-step reasoning), and the conclusions are consistent. The theoretical model we constructed (Eq. 14 & 15) is applicable to models with an arbitrary number of layers. Figure 3 is an embellished schematic diagram; the actual information flow diagrams derived from the theoretical model can be found in Appendix D3.\\n\\nAdditionally, in the appendix, we have added experiments training a 2-layer Transformer using 1-step reasoning data, where the information flow is the same as what we presented in the main text. We also demonstrated that this network, combined with the CoT strategy, can perform higher-step reasoning, such as 3-step reasoning. These experiments further illustrate the generality of the buffer mechanism.\\n\\n**[R1Q1] [Kernel Score]**\\n\\nThere was a typo in the original manuscript regarding the definition of the kernel score. 
Our original intention was to define Kernel Score (Ker) = mean(|$Ker_{ii}$|) / mean(|$Ker_{ij}$|). Thank you for pointing this out. In the revised manuscript, we have unified the definitions of the kernel score and matching score. We define the kernel score as Trace(softmax(Ker)) / $d_m$. $Ker = I$ provides the intrinsic driving force for the matching matrix $h(X)=I$ for out-of-distribution (OOD) tokens. If we do not consider nonlinear effects such as LayerNorm and feed-forward networks (FFN), the kernel matrix simplifies the analysis. However, since the matching score is directly related to which tokens the model focuses on and can account for nonlinear effects, it is indeed a more meaningful metric in practical applications.\\n\\n**[R1Q2] [Hyperparameter Sweep for RMBA]**\\n\\nTo make the results more convincing, we have added to the appendix of the revised manuscript an analysis of the impact of different hyperparameters on the results of these three methods. We selected 4 weight decay values, 5 learning rates, and 4 configurations of hidden space dimensions. Under each setting, we conducted 3 experiments for each of the 3 algorithms (baseline, IMBA, RMBA), **totaling 720 experiments**. In most cases, RMBA achieved better results than the other algorithms.\\n\\n**[Other Revisions]**\\n\\nIn addition, we have incorporated the suggestions of other reviewers to enrich the explanations of some figures and formulas in the paper. We have rewritten content that might cause ambiguity, enhancing the rigor of the manuscript. The specific changes can be found in the common response. Your comments, along with those of other reviewers, have greatly helped improve the quality of our paper.\\n\\n**We look forward to your reply!**\"}", "{\"comment\": \"__Hyperparameter sweep__\\n\\nThanks for doing the experiments. It looks like for the optimal setting (weight decay=0.1, learning rate = 0.0001), all three parameterizations can sometimes achieve the same accuracy at the same time. 
Is the interpretation then that the RMBA parameterization can more robustly reach a certain accuracy by a certain number of training steps across a wider range of hyperparameters?\\n\\n__Competing mechanisms__\\n\\nI'm not convinced that your experiments show that the overwrite mechanism doesn't exist in the vertical thinking framework. Can you specify how you think the kernel or matching scores will be different for the overwrite vs buffer mechanism, and show evidence for one mechanism over the other?\\n\\n__Application to Phi-3 model__\\n\\nI understand that you have plotted the kernel scores for Phi-3, which essentially measures some form of alignment between the weight matrices from attention heads in one layer to another. I understand that you found that this alignment for $W^{QK}W^{VO\\\\top}$ only exists if $W^{QK}$ comes from a later layer than $W^{VO}$. Could you explain how this is evidence for the buffer mechanism per se, and not an example of the many weight alignment phenomena found? For example, the original induction head paper also found alignment between copying and induction heads (Elhage et al, 2021). I'm guessing one part of the evidence is to show that $W^{VO}$ and $W^{QK}$ are individually not diagonal, but their composition is.\\n\\n__General comment about experimental rigor__\\n\\nOverall, I'm not convinced that enough rigor has gone into experiment design. I'm happy to see that the authors take steps to improve the paper in this regard with the hyperparameter sweep, but I believe more should be done. Specifically, the main evidence presented (kernel score and matching score) are implications of the buffer mechanisms, but are insufficient for showing their existence. If the authors can provide coherent and definitive experiments that rule out plausible mechanisms (overwrite, ordinary induction heads), I will raise my score accordingly.\\n\\n\\nAubry, Murdock, et al. 
\\\"Transformer Block Coupling and its Correlation with Generalization in LLMs.\\\" arXiv preprint arXiv:2407.07810 (2024).\\n\\nElhage, Nelson, et al. \\\"A mathematical framework for transformer circuits.\\\" Transformer Circuits Thread 1.1 (2021): 12.\"}", "{\"comment\": \"Dear reviewer, we would like to once again express our sincere gratitude for your constructive feedback and for recognizing the improvements we made to our manuscript. Over the past two weeks, we have further revised the paper by incorporating both your suggestions and those of the other reviewers. Specifically, we have added **9 pages** of appendices and made detailed updates to the main text. Below is a summary of the key changes we have made:\\n\\n> 1. We have added Appendix G to present the detailed inference processes of **Chain-of-Thought (CoT)** reasoning. Additionally, **(new)** we report the accuracy of the model trained with 1-step reasoning data on tasks requiring 1 to 6 steps of reasoning.\\n\\n> 2. We have conducted experiments on the **real language model Phi-3**, providing three strong pieces of evidence to support the high likelihood that the buffer mechanism is adopted by large models: (1) the weight alignment phenomenon, (2) **(new)** the reasoning head positions identified using the matching score, which coincide with findings from previous studies, and (3) **(new)** the near orthogonality of the {$W^{vo(l)}$} matrices across layers in Phi-3.\\n\\n> 3. We have performed **hyperparameter sweep for the RMBA-related experiments**, conducting a total of 720 experiments to comprehensively validate the role of RMBA in enhancing the reasoning capabilities of models on symbolic tasks.\\n\\n> 4. **(new)** We have added Appendix J: Causal Intervention. Through **over 600 causal intervention experiments**, we aim to rule out the possibility that Transformers are using alternative mechanisms when learning from symbolic data.\\n\\n> 5. 
**(new)** We have revised over-claimed sentences in the main text to **enhance the rigor and precision of the paper**.\\n\\nWe hope that these additional experiments and revisions have further strengthened our paper, and we would greatly appreciate it if you could consider the new results and updates. Your thoughtful feedback has been invaluable to us, and we are confident that these improvements contribute meaningfully to the paper's overall quality. Thank you once again for your time and consideration. We look forward to your continued insights and feedback.\"}", "{\"comment\": \"(2) From a numerical simulation perspective, although there is theoretical and numerical evidence that IMBA promotes the formation of induction heads, applying IMBA to 2-step reasoning tasks shows that it fails to improve model accuracy effectively in most settings. This aligns with our buffer theory, as IMBA sacrifices buffer differentiation. This direct evidence is summarized in Section 5 of the main text:\\n\\n> These results also demonstrate that multi-step reasoning is not achieved by simply \\\"stacking\\\" multiple single-step reasonings. An algorithm that enhances single-step reasoning may not be applicable to multi-step reasoning. Therefore, investigating the mechanisms of multi-step reasoning is an important and meaningful topic.\\n\\n(3) As shown in Appendix E (parallel reasoning), when the reasoning step count $n \\\\geq$ the number of model layers $L$, the model exhibits parallel reasoning behavior. This further demonstrates that \\\"linearly stacking multiple single-step reasoning processes\\\" cannot describe all mechanisms used by models to achieve multi-step reasoning.\\n\\n[Application to Phi-3 Model]\\n\\nThank you for your insightful suggestions. In Appendix H of the revised manuscript, we include relevant experiments. 
Contrary to expectations, some heads consistently exhibit diagonal patterns across layers in large models, leading to diagonalized $W^{vo(l)}$ and $W^{qk(l)}$ from a layer-wise perspective. However, we designed two experiments to demonstrate that **the diagonal patterns in $W^{qk}W^{vo,T}$ are not entirely caused by the diagonalization of $W^{qk}$ and $W^{vo}$ themselves but are more often due to weight alignment.**\\n\\n(1) Setting the diagonal elements of $W^{qk}$ and $W^{vo}$ to 0 shows that the kernel score (KS) of $W^{qk}W^{vo,T}$ remains largely unchanged (Figure 21(f)).\\n\\n(2) Using noise-added identity matrices $A=I+aN(0,0.1)$ and $B=I+aN(0,0.1)$, we evaluated their kernel score (KS). When $\\\\max(\\\\text{KS}(A), \\\\text{KS}(B))<0.3$, $\\\\text{KS}(C)<0.025$ for $C=AB$ (Figure 21(d)). However, in Phi-3, we found many cases where $\\\\max(\\\\text{KS}(W^{qk(l_1)}), \\\\text{KS}(W^{vo(l_2),T}))<0.3$ but $\\\\text{KS}(W^{qk(l_1)}W^{vo(l_2),T})>0.3$ (Figure 21(c)). This indicates that weight alignment significantly impacts the formation of diagonal structures.\\n\\nThe primary purpose of including the Phi-3 model section is to address Reviewer [JU6V]\\u2019s question: \\u201cWould it be reasonable for block-diagonal weight?\\u201d We demonstrate that weight alignment phenomena also occur in real large-scale language models. Your suggestion has helped make our explanation more comprehensive and rigorous.\\n\\nAnother function of this section is to extend the matching/kernel score discussed in the main text to the multi-head model scenario. 
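(Editorial aside: the kernel score as unified earlier in the thread, Trace(softmax(Ker)) / $d_m$, is straightforward to compute. The sketch below is our illustration, not the authors' code; in particular, the row-wise choice of softmax axis is an assumption, since the normalization axis is not stated in the thread.)

```python
import numpy as np

def kernel_score(ker):
    """Kernel score = Trace(softmax(Ker)) / d_m, with a row-wise softmax.

    The row-wise softmax axis is an assumption; the thread only states the
    formula Trace(softmax(Ker)) / d_m for a d_m x d_m kernel matrix.
    """
    z = ker - ker.max(axis=1, keepdims=True)  # stabilize the exponentials
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return float(np.trace(p) / ker.shape[0])

# A kernel whose diagonal dominates its rows scores near 1, while a
# featureless kernel scores 1/d_m (uniform softmax puts 1/d_m on the diagonal).
d = 8
score_diag = kernel_score(50.0 * np.eye(d))
score_flat = kernel_score(np.zeros((d, d)))
```

Under this definition, the score depends on how strongly the diagonal dominates each row, which matches the interpretation in the thread that a diagonal kernel drives same-token matching.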
The score, as defined in our framework, effectively reflects the location and characteristics of reasoning heads in the model, aligning well with conclusions from some previous studies.\\n\\nHowever, this example alone cannot strictly and directly support the claim: \\n>The buffer mechanism can also be observed in real language models.\\n\\nIn the revised manuscript, we have removed this overclaim.\\n\\n[Transformer Block Coupling]\\n\\n(Aubry, Murdock, et al. 2024) is an intriguing parallel work that identifies weight alignment phenomena in large models through Jacobian matrices. We will cite this work in the revised manuscript and follow up on its developments in future research.\\n\\n**We sincerely appreciate your thorough and patient review of our manuscript. Please do not hesitate to reach out if you have any further questions or require additional clarifications!**\"}", "{\"comment\": \"I would have taken a different approach with \\\"Toward(s) Understanding How Transformers Deal with Multi-Step Reasoning\\\"...\\n\\nI've increased my score by a notch.\"}", "{\"comment\": \"We appreciate the reviewers for their thorough review of all the comments and suggestions, as well as the detailed feedback on our previous revisions. Below, we address each of your comments individually. Corresponding second-round modifications are highlighted in blue in the revised manuscript.\\n\\n**[Overclaim]**\\n\\nWe agree that our earlier revisions did not fully address the issue of overclaiming. Considering the complexity of language tasks and language models, defining any mechanism clearly within real-world large language models is nearly impossible. 
Our initial intent was:\\n\\n(1) To identify or propose mechanisms that real language models might employ.\\n\\n(2) To validate that this mechanism is indeed employed by the model in certain simple tasks.\\n\\nTo address this more rigorously, we made the following additional revisions:\\n\\n(a) The title of the paper has been updated to: **A Buffer Mechanism Employed in Language Models to Implement Symbolic Multi-Step Reasoning Tasks** (It\u2019s allowed by policy)\\n\\n(b) We revised the statement: \\n\\n> We discover the buffer mechanism employed by language models during the reasoning process in symbolic multi-step reasoning tasks\", \"to\": \"> We found evidence that supports a buffer mechanism being employed by language models during the reasoning process in symbolic multi-step reasoning tasks.\\n\\n(c) As described in our second-round response to Reviewer [fnjG], we conducted a series of causal intervention experiments to show that, in a well-trained 3-layer model, altering the information stored in the final token's buffers results in reasoning failure. We believe this task demonstrates that the model employs a buffer mechanism for reasoning in symbolic tasks. If other compensatory mechanisms exist, the model should, at least once, be able to produce correct outputs after critical information for the buffer mechanism has been disrupted. However, this was not observed in our results. The experimental details and methodology can be found in Appendix J of the revised manuscript.\\n\\n**[Accuracy Suddenly Improved]**\\n\\nThis phenomenon may differ from the previously studied grokking phenomenon. 
It could represent another significant behavior, and we plan to focus on the model's dynamics in subsequent research to investigate this further.\\n\\n**[OOD Embedding]**\", \"we_have_added_the_following_explanation_to_the_main_text_to_highlight_the_principle_behind_ood_generalization\": \"> Furthermore, a much more remarkable observation is that the same-token matching is independent of the specific value of $X_{tgt}$. For example, for $X_{tgt,OOD}$ sampled from the **untrained random vectors** $token_{OOD}$, $h^{(l)}(X_{tgt,OOD}) \\\\approx I$ still holds. Therefore, when the model's weights satisfy $Ker^{(l)} \\\\approx I$, the model exhibits out-of-distribution generalization capability.\\n\\n**[Parallel Reasoning]**\\n\\nWe agree with the reviewer\\u2019s perspective. This section is just a supplement to the vertical reasoning discussion in the main text. In the revised manuscript, we have rephrased the buffer mechanism in the Parallel Reasoning section as \\\"a possible explanation.\\\" We hope this section serves as an open-ended exploration that enriches the article's content and inspires future work, while avoiding overclaiming.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**[R2Q1] [Reasonable]**\\nThere is an intuitive explanation for the diagonal structure observed between the $W^{vo}$ matrix of the previous layer and the $W^{qk}$ matrix of the next layer: information derived in one layer is directly utilized by the next layer, aligning with logical reasoning. \\n\\n1. **Diagonal Structure in Real Models**: \\n In the revised manuscript, we have supplemented experiments using the Phi-3 model to compute matching scores and kernel scores, and defined methods for calculating these metrics in multi-head models. 
   Key observations include:
   - **(a)** When $l_1 \leq l_2$, $W^{qk(l_1)} W^{vo(l_2),T}$ exhibits almost no significant features (kernel score ≈ 0).
   - **(b)** When $l_1 > l_2$, the kernel score of $W^{qk(l_1)} W^{vo(l_2),T}$ decreases as $l_1 - l_2$ increases. These observations indicate that information stored in earlier layers (by buffer $W^{vo(l_2)}$) is extracted by later layers (via $W^{qk(l_1)}$), and the model tends to extract the most recently processed information.
   - **(c)** Heads with high matching scores are mostly concentrated in the middle layers, which aligns with previous findings that reasoning in large models predominantly occurs in the middle layers.

2. **Improving Reasoning**:
   Both the RMBA algorithm proposed in our paper and the IMBA algorithm by Boix-Adsera et al. [1] improve the model's reasoning capability. This supports the conclusion that the diagonal structure of $W^{qk(l_1)}W^{vo(l_2),T}$ facilitates information transfer.
   *Reference*: [1] Boix-Adsera E, et al., *When can transformers reason with abstract symbols?*

3. **Future Directions**:
   Real-world data have different characteristics (e.g., memory-centric vs. reasoning-centric). In the future, MoE (Mixture of Experts) architectures could potentially be designed to incorporate diagonal structures in certain layers and heads, improving both reasoning and memory capabilities.

**[R2Q2] [Accuracy Suddenly Improved]**
This is a good question. Understanding the dynamics of how models learn multi-step reasoning is a key direction of our future work. Currently, we lack a rigorous explanation but have observed that this phenomenon is consistent, regardless of changes to the learning rate, weight decay, $d_m$, and $d_k$. When the number of epochs is fewer than 50, the model has yet to establish a stable information flow (even on the training data) and appears to rely more on memorization.
The cause of this phenomenon remains unclear.

**[R2Q3] [OOD Embedding]**
In fact, Transformers do not need to learn embeddings for OOD tokens. Based on our mechanism analysis and random matrix theory, as long as $Ker = I$, it follows that $h(X_{OOD}) \approx X_{OOD}X_{OOD}^T \approx I$, ensuring same-token matching. This is one of the most intriguing aspects of the paper, as the OOD generalization capability effectively characterizes whether the model has truly "understood" the rules.

**[R2Q4] [Parallel Reasoning]**
The parallel reasoning described in the Appendix also utilizes the buffer mechanism for information transfer. Here, the buffer mechanism is represented as $\{\prod_{l \in J} \mathbf{W}^{vo(l)} \mid J \subset \{0, 1, \ldots, L-1\}\}$. Figure 12(c) shows that the model still stores different information in different buffers. This section complements the vertical reasoning section and applies to cases where the number of reasoning steps $n \geq L$ (the model depth). It does not conflict with the main text, which focuses on $n < L$. Generally, the depth of a large language model greatly exceeds the reasoning steps required for single predictions, especially with the prevalent use of CoT prompting in modern large models.

**[Additional Revisions]**
In addition, we have incorporated feedback from other reviewers by adding experimental studies on CoT and conducting a thorough parameter sweep for the RMBA experiments. We also enriched the explanations of some figures and formulas and clarified ambiguous content to improve the rigor of the paper. Detailed revisions can be found in the common response. Your comments, along with those of other reviewers, have greatly helped improve the quality of our paper.

**We look forward to your reply!**

---

Thanks for providing the additional experiments and clarifications.
I have increased my score to 5 to reflect my greater confidence in the soundness of the experiments.

However, after the authors' clarification that they do not in fact have solid evidence for the buffer mechanism in Phi-3, I'm slightly concerned about the applicability of the results.

Specifically, it's possible that the buffer mechanism is a product of the particular training setup you used, and it may not emerge in pretrained language models. One reason this might be true is that pretrained LMs tend to find directions in activation space that correspond to semantic concepts. This view is fundamentally at odds with the buffer mechanism, which says instead that there are multiple replicas of a semantic space.

Are there more concrete ways of determining whether the buffer mechanism is used in Phi-3? This is the crux for me that will raise my score to a 6 or an 8.

A minor point: Sec. J is very sparse on details. In particular, Fig. 24 should probably report quantitative metrics for the various predictions, along with some _sense_ of the statistical significance. For example, you might report how frequently the interventions work as you claim, and how many data points you tested with.

---

We greatly appreciate the reviewers' recognition of our work and the recent revisions made! Your numerous suggestions regarding rigor have significantly enhanced the quality of this article. Moving forward, we will continue to prioritize the rigorous articulation of our arguments. Should you have any further questions, please feel free to contact us.

---

**Common Response**

We would like to express our sincere gratitude to the reviewers for their valuable insights and comments, which we have incorporated into our revised manuscript.
In response to the reviewers' suggestions, we have devoted considerable time and effort to implementing several significant revisions, including the addition of supplementary experimental results and numerous improvements to the overall presentation of the paper. Below are the specific modifications and additions we have made. **The changes in the revised manuscript are marked in red.**

- **[R1][Presentation]** We have meticulously refined the captions of every figure and table in the manuscript. The derivations of the matching matrix and kernel matrix, which are crucial to our paper, have been rewritten to eliminate any potential ambiguities present in the original version. Additionally, we have revised some statements in the original text that might have appeared to be overclaims.

- **[R2][Definition of Formulas]** We have **redefined the kernel score** utilizing the normalization property of the softmax function. The new kernel score now ranges within [0, 1], making it more interpretable.

- **[R3][Supplementary Experiments]** Following the reviewers' suggestions, and to enhance the persuasiveness of the paper, we have added the following experimental content:

  - **Hyperparameter Sweep for the RMBA and IMBA Algorithms.** In the appendix of the new manuscript, we have included results demonstrating the impact of different hyperparameters on the outcomes of the three methods. We selected 4 weight decay values, 5 learning rates, and 4 configurations of hidden space dimensions. Under each setting, we conducted 3 experiments for each of the three algorithms (baseline, IMBA, RMBA), **totaling 720 experiments**. In most cases, RMBA achieved better results than the other algorithms.
  Additionally, we have supplemented the loss curves of the three algorithms on the PrOntoQA task, showing that in terms of training stability, RMBA is comparable to the baseline.

  - **Training a 2-Layer Transformer with Only Single-Step Reasoning Data to Perform Multi-Step Reasoning via CoT.** In the appendix, we demonstrate how a 2-layer Transformer network, trained with only single-step reasoning data, can use Chain-of-Thought (CoT) to execute multi-step reasoning tasks. We have provided detailed information flow diagrams, rather than the schematic diagrams presented in the main text.

  - **Calculating the Matching Score and Kernel Score in the Phi-3 Model.** We employed the open-source Phi-3 large language model and added methods for calculating the matching score and kernel score in multi-head models. We computed the strength of information interaction between layers of Phi-3 using the kernel score and calculated the matching score of each head in each layer. We found:
    - **(a)** When $l_1 \leq l_2$, $W^{qk(l_1)} W^{vo(l_2),T}$ has almost no significant features (kernel score ≈ 0).
    - **(b)** When $l_1 > l_2$, the kernel score of $W^{qk(l_1)} W^{vo(l_2),T}$ decreases as $l_1 - l_2$ increases. Observations (a) and (b) indicate that information stored in earlier layers (with buffer $W^{vo(l_2)}$) is extracted by later layers (via $W^{qk(l_1)}$). Moreover, the model tends to extract the most recently processed information.
    - **(c)** Heads with high matching scores are mostly concentrated in the middle layers. This aligns with previous research findings that reasoning in large models predominantly occurs in the middle layers.

  - **Conducting More Rounds of Interaction with Large Language Models to Evaluate Their Reasoning Abilities.** We used 4 GPT and Claude models to perform 2-step, 3-step, and 4-step reasoning tasks, examining the models' performance both when allowed to use CoT and when not.
For each setting, we repeated the interaction 25 times, **totaling 600 interactions**, to provide statistically significant results. The results indicate that existing large models find it challenging to perform 2-step or deeper reasoning tasks directly, and that with the aid of CoT, these models can excel in multi-step reasoning tasks.

---

We extend our sincere gratitude to the reviewers for their support of our work and for taking the additional time to examine the revisions we have made. Your constructive feedback has significantly contributed to enhancing our manuscript. We respect your final evaluation of our article and wish you continued success in your future endeavors.

---

Thank you very much for your recognition of our work and for the further questions you raised. We address the issues related to the Phi-3 model that you mentioned as follows:

(1) Firstly, the removal of the absolute phrasing in the original text was not meant to indicate that the "buffer mechanism is not in fact observed in Phi-3." On the contrary, **our numerous experiments provide substantial evidence that the buffer mechanism is very likely used in real large models**:

(a) We defined the kernel/matching score for the multi-head model, and the computed scores align well with the positions and characteristics of the inference heads in the model, consistent with findings from previous studies.

(b) In the simplest form of multi-step reasoning, i.e., single-step reasoning, we demonstrated that the weight alignment phenomenon mentioned in buffer theory is real and observable.

(c) **(new)** Another strong piece of evidence supporting the existence of the buffer mechanism is the computation of cosine similarity between the $W^{vo(l)}$'s.
We have added an experiment in the revised manuscript to compute this cosine similarity, and the result is that $\mathrm{cossim}(W^{vo(l_1)}, W^{vo(l_2)}) \approx 0$ when $l_1 \neq l_2$.

These experiments indirectly suggest that the buffer mechanism is likely adopted in real large models. However, as you and another reviewer pointed out earlier, it is nearly impossible to exclude all other potential mechanisms in highly complex real-world large language models. **We removed the absolute phrasing purely for the sake of rigor.** The following content has been added to the main text:

> Phenomena such as weight alignment and $W^{vo(l)}$ diversity provide evidence for the presence of the buffer mechanism in real language models.

(2) As suggested by you and other reviewers, the primary aim of this paper is to propose, **through experiments and observations on simple tasks, potential mechanisms that may be employed by Transformer models when performing multi-step reasoning. This helps to shed light on the advantages of the Transformer architecture.**

(3) Our work provides **heuristic insights** into how the reasoning capabilities of large language models can be enhanced in the future. For example, **RMBA**, designed based on the buffer mechanism, significantly improved model performance on the PrOntoQA dataset. Future work could explore **incorporating "qk & vo alignment" or "vo diversity" into the loss function** during training to further boost model reasoning abilities.
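As a minimal sketch of the cosine-similarity check described above (random matrices stand in for the actual Phi-3 $W^{vo(l)}$ weights, and the flatten-then-normalize definition of matrix cosine similarity is our illustrative assumption):

```python
import numpy as np

def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two matrices, treated as flat vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
d = 512

# Stand-ins for W^{vo(l1)} and W^{vo(l2)} from two different layers.
w1 = rng.normal(size=(d, d))
w2 = rng.normal(size=(d, d))

print(cos_sim(w1, w1))  # 1.0: a matrix is fully aligned with itself
print(cos_sim(w1, w2))  # near 0: independent high-dim matrices are near-orthogonal
```

Note that near-zero similarity is exactly what independent high-dimensional matrices exhibit, which is why the informative part of the experiment is that the trained per-layer weights behave this way.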
Alternatively, we are considering **developing causal intervention experiments at the buffer level**, as shown in Appendix J, to more precisely probe useful information.

Regarding the **causal intervention** experiments, we have provided a detailed flowchart in the appendix and added the following example to explain how a causal intervention can be performed:

> For example, as shown in Fig. 23 (right), suppose we want to change the information [a] stored in the buffer $W^{vo(0)}W^{vo(1)}$ in layer 2 to [x]. This can be achieved by simply modifying the input sentence [a'][b'][b'][c'][a][b][b][c]...[a] to [a'][b'][b'][c'][x][b][b][c]...[a], and enforcing $Attn^{(1)}_{[13:]}$ to be the same as before.

For each scenario, we traversed [40, 100] for [x], conducting at least **600 experiments in total** and obtaining consistent results; that is, for cases where the table does not indicate "Random," **the probability of the model output deviating from the value presented in the table is less than $10^{-15}$**. We report this result in the caption as follows:

> In this experiment, the tokens in the original sentence are selected from the range [20, 40], while token [x] traverses the range [40, 100]. $Attn^{(l)}_{[13:]}$ refers to the attention score corresponding to the last token at layer $l$. For the original input, we have $Attn^{(1)}_{[13,6]} = 1$ and $Attn^{(2)}_{[13,8]} = 1$. Instances labeled as "Random" indicate that the output varies erratically as [x] changes. In all other cases, the probability of the model output deviating from the value presented in the table is less than $10^{-15}$.

The corresponding third-round modifications are highlighted in the revised manuscript using green text. **We sincerely appreciate the valuable feedback provided by the reviewer and deeply value your thoughtful insights. We also hope to continue receiving your support in advancing this original work.
Should you have any further questions or suggestions, please feel free to reach out to us at any time. We would be happy to continue the discussion.**

---

Thank you for your constructive comments on our paper. Below are our detailed responses to each of your points:

**[R4W1] [Insufficient Explanations of Equations and Figures]**

We have thoroughly revised the captions of every figure and table in the paper, adding detailed explanations to the flowcharts and supplementing the experimental figures with conclusions. Additionally, we have rewritten some paragraphs that might have been ambiguous to enhance readability. Due to the extensive nature of these changes, please refer to our revised manuscript for the specific modifications.

**[R4Q1] [75% Time Saving]**

We apologize again for the lack of detailed explanation. As shown in Figure 8(b), after approximately one epoch, the accuracy of the baseline model rises above 95%, while the RMBA0.05 model reaches nearly the same accuracy after only about 0.25 epochs. Therefore, it saves about 75% of the training time. We have added the corresponding explanation in the new manuscript.

**[R4Q2] [RMBA Stability]**

According to our current experimental results, RMBA does not appear to have significant stability issues. Firstly, in Figure 8(c), we show the results after adding perturbations to different models; the experiments indicate that the RMBA algorithm has stability comparable to the baseline. We have also added the training loss curves of the PrOntoQA task in the appendix of the new manuscript, which also support our viewpoint.

From a theoretical perspective, RMBA can be understood as a special form of residual connection (just multiplied by a constant matrix), and residual connections are known to promote stability.
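The residual-stability intuition can be illustrated with a toy computation: composing identity-plus-small-matrix maps keeps activation norms on the order of the input, whereas composing the small matrices alone collapses them. This is a generic sketch of that argument, not RMBA's actual parameterization; the layer count, dimensions, and scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers = 64, 20
x = rng.normal(size=d)
x /= np.linalg.norm(x)  # unit-norm input

# Random layer matrices with small entries.
layers = [rng.normal(0.0, 0.1 / np.sqrt(d), size=(d, d)) for _ in range(n_layers)]

# Plain stack: h <- A h. Norms shrink multiplicatively and vanish.
h_plain = x.copy()
for A in layers:
    h_plain = A @ h_plain

# Residual form: h <- h + A h, i.e., (I + A) h. Each factor is close to
# the identity, so the norm stays on the order of the input's norm.
h_res = x.copy()
for A in layers:
    h_res = h_res + A @ h_res

print(np.linalg.norm(h_plain), np.linalg.norm(h_res))
```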
We speculate that this may be one reason why RMBA does not lead to decreased stability.

**[R4Q3] [Recommendations]**

We will comprehensively revise the figure captions according to your suggestions.

**[Other Modifications]**

In addition, we have incorporated suggestions from other reviewers, adding visualization experiments of the matching score in the Phi-3 model and experiments using a 2-layer Transformer combined with CoT to achieve multi-step reasoning, and we have conducted detailed parameter sweeps for the RMBA experiments. We have also rewritten content in the paper that might cause ambiguity, enhancing the rigor of the manuscript. Specific changes can be found in the common response. Your comments, along with those of other reviewers, have greatly helped improve the quality of our paper.

**We look forward to your reply!**

---

**Updates**

Thank you for your comments and the changes you made. I have updated my score accordingly.

---

**Summary:** This paper analyzes the internal reasoning mechanisms of Transformers from the perspective of vertical and horizontal thinking on a symbolic multi-step reasoning dataset. The authors analyze the concept of a Buffer Mechanism, by which the model stores information and retrieves it through the query-key matrix. They explain how the model leverages this mechanism to enhance its multi-step reasoning capability. Finally, they show how their method reduces the cost of generalization.

**Soundness:** 3

**Presentation:** 3

**Contribution:** 3

**Strengths:**
1. Insightful findings. The paper presents interesting findings on how Transformers make use of the Buffer Mechanism.
2. Great contribution. RMBA shows that the authors can leverage the Buffer Mechanism to provide better efficiency.
3. Rigorous experimentation. The analysis and experiments show careful detail and understanding of the processes.

**Weaknesses:**
1. 
Poor presentation: The paper is hard to read in general. There are few explanations of the equations and figures, and the presentation of the work does not follow a good format. I believe the paper can have a much greater impact if more attention is given to the writing and presentation.
2. Presentation of experiment results: The paper does not clearly define the metrics used to compare the results from IMBA and RMBA, which makes it hard to understand on a first reading. For example, in Figure 7, they show that the IMBA model is unable to achieve a good test accuracy, but they do not quantify it or provide any further analysis.
3. Better math descriptions: Equations and formulas are introduced without a formal introduction and explanation.

**Questions:**
Some questions about the paper:
1. Can you explain why you claim 75% when compared to the baseline? I see the plots in Figure 8, but I cannot see the numbers to quantify this generalization myself.
2. Does RMBA have some drawback during training? I assume the training process might be more unstable due to the randomness of the matrix; is such behavior observed?

In general, more clarity and a better description of the experiments can help readers understand the paper. I miss a clear structure of the paper that would make it understandable on a first read.

**Some recommendations:**
- Figure 2: It is missing an explanation of what figures (a) and (b) represent and what the takeaways from those figures are.
- Figures 5 and 6: Both are hard to read and do not display the axes. I can deduce their meaning after reading the paper, but more clarity would help other readers.
- Figure 4: How does it provide a better understanding of the analysis?

**Flag for ethics review:** No ethics review needed.

**Rating:** 6

**Confidence:** 3

**Code of conduct:** Yes

---

Dear reviewer, we would like to thank you again for your valuable feedback during the review process.
Today is the final deadline for submitting the revised manuscript, and we are concerned that you may not have noticed the changes we made in response to your previous comments. Over the past period, we have engaged in discussions with the other reviewers, and based on your and their suggestions, we have made further revisions and expansions to the manuscript. Specifically, we have added **9 pages** of appendix material and made detailed updates to the main text. Here is a brief summary of the key changes:

> 1. We have added Appendix G to present the detailed inference processes of Chain-of-Thought (CoT) reasoning. Additionally, **we report the accuracy of the model trained with 1-step reasoning data on tasks requiring 1 to 6 steps of reasoning.**

> 2. We have conducted experiments on the **real language model Phi-3**, providing three strong pieces of evidence to support the high likelihood that the buffer mechanism is adopted by large models: (1) the weight alignment phenomenon, (2) the reasoning head positions identified using the matching score, which coincide with findings from previous studies, and (3) the near-orthogonality of the $W^{vo(l)}$ matrices across layers in Phi-3.

> 3. We have performed a **hyperparameter sweep for the RMBA-related experiments**, conducting a total of 720 experiments to comprehensively validate the role of RMBA in enhancing the reasoning capabilities of models on symbolic tasks.

> 4. **(new)** We have added Appendix J: Causal Intervention. Through **over 600 causal intervention experiments**, we aim to rule out the possibility that Transformers are using alternative mechanisms when learning from symbolic data.

> 5. **(new)** We have revised over-claimed sentences in the main text to **enhance the rigor and precision of the paper**.

In addition, the other three reviewers have given positive feedback on the improvements we made and have accordingly increased their ratings.
**We sincerely hope that you can further support our work and would appreciate it if you could let us know whether you are satisfied with our responses to your previous questions.** If you have any additional concerns or suggestions, we would be more than happy to discuss them and make any further adjustments. Thank you once again for your time and thoughtful feedback.

---

I acknowledge the changes in the paper and the additional experiments. I will therefore reconsider my final score accordingly, after a final read of the current version today.

---

Dear Reviewer, we would like to thank you again for your valuable feedback during the review process. As the discussion phase is nearing its conclusion, this message is to confirm whether you have seen our previous response (about Phi-3 and causal intervention) and whether you are satisfied with the updates we provided.

In our last response, we included several new pieces of evidence supporting the existence of the buffer mechanism in the Phi-3 model. Additionally, we acknowledge your concerns regarding the semantic space in real language models. In fact, previous works have shown that the semantic space of large models is often a low-dimensional manifold. For example, through 2D PCA analysis, we can observe clustering structures within the embedding matrix. However, generally speaking, both token embeddings and hidden states in large language models are high-dimensional vectors. This suggests that in high-dimensional vector spaces, it is natural for large models to employ buffers to store different attributes of an entity, thus creating multiple copies of semantic spaces. This ensures that the sum of two semantic vectors will not lead to a loss of the original meaning. A simple analogy is as follows: if the sum of two positive integers is known to be 5, we cannot determine whether the addends are (2,3) or (1,4).
However, if the sum of two 2-dimensional addends, each with only one non-zero element, is [1,4], we can easily determine that the addends are [1,0] and [0,4], i.e., [1,4] = 1*[1,0] + 4*[0,1] (here, the two buffers are [1,0] and [0,1]).

**We sincerely appreciate your contributions to this work over this period, and we hope to receive your continued support.**

---

Thanks for adding the experiments with the cosine similarity on the $W^{ov}$ matrices for Phi-3.

I still believe that this line of weight-alignment argument is insufficient for establishing that Phi-3 exhibits the buffer mechanism. The weight alignment phenomenon you're highlighting in Sec. H seems extremely general, because it's a property of the Phi-3 weights and not a property of any particular symbolic reasoning task studied in the main paper. This makes the hypothesis space much larger, because many training-dynamics considerations could lead to the formation of this weight alignment pattern.

If you can propose a concrete task on which Phi-3 exhibits the buffer mechanism, it will greatly improve the relevance and impact of the paper. Such evidence can come in the form of analyzing the representational role of activations, or analyzing the behavior of attention heads on that concrete task. I believe that if this is rigorously shown in an LLM, this paper can be really important.

Overall, while I appreciate the authors' efforts at improving the experimental rigor, I still believe that the paper's contributions are limited because the authors chose to focus their analysis on models trained on toy synthetic datasets rather than on an LLM. I will therefore choose not to increase my score.

---

Thank you for the response and the additional analysis.
I appreciate the additional results about lateral reasoning in Appendix G.

> unlike vertical thinking where the model needs to form multiple buffers, when using CoT, the model repeatedly overwrites the same buffer to achieve multi-step reasoning.

I see you are mentioning this in Lines 401-402 too. Can you clarify what evidence you are using to support this claim? My understanding is that we can conclude this because a model trained with 13-length data is able to correctly predict the next tokens after the 13th token (Figure 18). I think this should be made clear in the main text.

Also, can the results illustrated in Figure 18 be quantified? E.g., with what accuracy does the model predict the tokens after the 13th? Are the attention scores represented by the thickness of the lines averaged over multiple examples, or are they for a single example?

A minor point: the title of Appendix G reads "Detials" instead of "Details".

---

We appreciate the reviewer's thorough examination of the numerous changes we have made and the further suggestions provided. Below are our detailed responses to each of your recommendations; the corresponding revisions are highlighted in blue in the new manuscript.

**[Evidence to support claim]** The inner mechanism of CoT is actually identical to the mechanism of the first reasoning step in the vertical thinking strategy. In the new manuscript, we have added a figure of the CoT matching matrix in Appendix G to make our conclusions more convincing. In the main text, we have incorporated the content you mentioned about length generalization. Specifically, we have added the following (bold part):

> *We trained a 2-layer Transformer with the **13-length** single-step reasoning data. During the testing phase, we fed the model's output back into the model.
Through this CoT process, the model can perform 2-step, 3-step, or even higher-step reasoning, **and it can also generalize to sentence lengths beyond the 13th position.***

**[Quantify Figure 18]** This is an excellent suggestion. In Appendix G of the new manuscript, we have **added a figure (Figure 20) to quantify the accuracy of the network trained with single-step reasoning data when using CoT to perform multi-step reasoning.** The new results show that when using CoT, the model achieves an accuracy of 84.1% on 3-step reasoning tasks. Even for the most complex 6-step reasoning, it achieves an accuracy of 57.6%.

**[Attention scores in Figure 18]** The thickness of the attention lines shown in the figure is for a single example.

**Undoubtedly, your revision suggestions during this period have greatly helped us enrich the content of the CoT section. We look forward to your reply again!**

---

**Metareview**

The paper introduces a buffer mechanism in transformer models for multi-step reasoning tasks and proposes a Random Matrix-Based Algorithm (RMBA) to improve training efficiency. While the study offers insights into reasoning processes in transformers, it has significant weaknesses. The reliance on synthetic tasks and simplified datasets limits the generalizability of the findings to real-world large models. Experimental evidence for the buffer mechanism in practical language models remains insufficient, and key claims about the mechanism's role are overly assertive without rigorous empirical validation. The evaluation lacks direct comparisons to alternative hypotheses, such as overwrite mechanisms, reducing the robustness of the conclusions.

Strengths include addressing an important problem, novel theoretical insights, and detailed experimental analysis on synthetic datasets.
However, these are outweighed by the lack of generalizability, insufficient experimental rigor, and reliance on synthetic benchmarks.

**Additional comments on reviewer discussion:** The rebuttal addressed some concerns about clarity and added experiments on the Phi-3 model, but it failed to provide compelling evidence of the buffer mechanism's existence in real-world settings. Reviewers were unconvinced by the generalizability of the findings and highlighted the limited scope of the experiments. Claims about CoT reasoning and buffer mechanisms remain speculative without concrete empirical support. While the authors made substantial revisions and improved the clarity of the manuscript, the fundamental limitations in scope and validation justify the rejection decision.

---

Thank you for the constructive feedback on our manuscript. Below are our responses to each of your comments:

**[R2W1] [Overclaim]** We appreciate the reviewer pointing out the overclaim issues in our writing. We have revised all instances of overclaiming identified in our manuscript. Specifically, we have added the qualifier *"symbolic multi-step reasoning tasks"* to each relevant claim.

**[R2W2] [Circular Logic]**
The ambiguity in this section was due to our inadequate phrasing; in fact, there is no circular reasoning involved. We have revised the relevant section in the new manuscript as follows:
*To achieve same-token matching, it is sufficient for $\text{Ker}^{(l)} \approx I$, in which case*
$$h^{(l)}(X_{tgt}) \approx X_{tgt}X_{tgt}^T = I + O(1/\sqrt{d_m}).$$
*This equation indicates that the attention of all tokens is focused on themselves. Furthermore, we observe that same-token matching is independent of the specific value of $X_{tgt}$. For example, for $X_{tgt,OOD}$ sampled from untrained random vectors $token_{OOD}$, $h^{(l)}(X_{tgt,OOD}) \approx I$ still holds.
Therefore, when the model weights satisfy $Ker^{(l)} \approx I$, the model demonstrates out-of-distribution generalization capability.*

**[R2W3] [Minor Things]**
We conducted more detailed interactions with various large models. Specifically, we used four GPT and Claude models to perform 2-, 3-, and 4-step reasoning tasks, examining their performance with and without Chain-of-Thought (CoT) prompting. Each setup was repeated 25 times, **totaling 600 interactions**, to ensure statistically robust results. The findings show that current large models struggle to perform 2-step (or deeper) reasoning tasks directly. However, with CoT prompting, all these models excel at multi-step reasoning. Based on these results, we have revised Figure 1 and included the detailed interaction results in the Appendix. Additionally, we have corrected the typographical error in Figure 2.

---

**[Contribution]**

In our initial draft, we unintentionally made several overclaims. With the assistance of the reviewers, we believe the rigor of our manuscript has significantly improved, and we are committed to addressing any further issues of rigor that may remain.

Our work is not centered on the claim "We discovered how transformers do reasoning!" Rather, our goal is to present evidence of a mechanism that Transformers might use for reasoning. (Interestingly, our original title was "Toward Understanding How Transformers Deal with Multi-Step Reasoning".
While we changed the title to avoid overclaiming, we still overlooked the distinction between \\u201cThe\\u201d and \\u201cA\\u201d in English.)\\n\\nBeyond the issue of overclaiming, we would like to restate the key contributions of our work:\\n\\n(1) We propose a potential buffer mechanism and provide theoretical characterization and experimental analysis of this mechanism on a symbolic dataset.\\n\\n(2) Using the buffer mechanism and symbolic dataset, we differentiate between vertical and horizontal reasoning strategies. From the perspective of buffers, we offer a plausible explanation for why large models with CoT strategies often excel in reasoning tasks.\\n\\n(3) We utilize the buffer mechanism to explain why Transformers exhibit OOD token generalization capabilities and why IMBA fails on multi-step reasoning datasets.\\n\\n(4) With the understanding of the buffer mechanism, we propose the RMBA, which significantly improves learning efficiency on the PrOntoQA task.\\n\\n(5) In the Phi-3 model, we observe weight alignment phenomena that seem to support the existence of the buffer mechanism.\\n\\nWe are sincerely grateful for the reviewer\\u2019s suggestions regarding our overclaim issues. We firmly believe that through our collaborative efforts, this article has been greatly enhanced in value. **Given that we have carefully revised all known overclaim issues, we would deeply appreciate it if the reviewer could reassess the substantive contributions of our work. Your evaluation is truly important to us!**\"}", "{\"comment\": \"I think the additional experiments and analyses carried out by the authors made the paper a stronger submission. I have raised my overall rating to reflect these improvements.\"}", "{\"comment\": \"We would like to express our sincere gratitude for your support of our work. We also appreciate your constructive suggestions for improving our manuscript. 
We are pleased to hear that the additional experiments and analyses we recently conducted have enhanced the overall quality of the paper. We are deeply thankful for your willingness to consider increasing the score for our work.\"}" ] }
5KqveQdXiZ
Solving Differential Equations with Constrained Learning
[ "Viggo Moro", "Luiz F. O. Chamon" ]
(Partial) differential equations (PDEs) are fundamental tools for describing natural phenomena, making their solution crucial in science and engineering. While traditional methods, such as the finite element method, provide reliable solutions, their accuracy is often tied to the use of computationally intensive fine meshes. Moreover, they do not naturally account for measurements or prior solutions, and any change in the problem parameters requires results to be fully recomputed. Neural network-based approaches, such as physics-informed neural networks and neural operators, offer a mesh-free alternative by directly fitting those models to the PDE solution. They can also integrate prior knowledge and tackle entire families of PDEs by simply aggregating additional training losses. Nevertheless, they are highly sensitive to hyperparameters such as collocation points and the weights associated with each loss. This paper addresses these challenges by developing a science-constrained learning (SCL) framework. It demonstrates that finding a (weak) solution of a PDE is equivalent to solving a constrained learning problem with worst-case losses. This explains the limitations of previous methods that minimize the expected value of aggregated losses. SCL also organically integrates structural constraints (e.g., invariances) and (partial) measurements or known solutions. The resulting constrained learning problems can be tackled using a practical algorithm that yields accurate solutions across a variety of PDEs, neural network architectures, and prior knowledge levels without extensive hyperparameter tuning and sometimes even at a lower computational cost.
[ "Constrained learning", "partial differential equations", "neural operators", "physics-informed neural networks" ]
Accept (Poster)
https://openreview.net/pdf?id=5KqveQdXiZ
https://openreview.net/forum?id=5KqveQdXiZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z2T8Oj3JUq", "xnvwKW5JF1", "xD7Q6iIb80", "we0F468dBa", "w0UoajrvpY", "rYAJimiVM6", "rQrImM1wYn", "pmoS3Fiqeo", "nwalanfHra", "nqNqUT5mbS", "m0uegg7Smr", "kKKaZ6djEY", "jvvew5cag9", "fZ40O6UG44", "dqVc7gJXu2", "cnMHX6t0tB", "bpChj7JtAp", "bBTwRYHFsC", "ZXJN0i31XE", "Z4h5UPs1pV", "VjyEPagSyL", "U3FMDnknYO", "TzW99ehJM0", "RIpTc7j2cK", "QluYIAcVYq", "OIFZ1JLjy7", "MByE7IMG8u", "Jz9vd0xL0q", "JleoYk6V6i", "Jc71qVpwVP", "ItBww4kkqg", "IU0KNu5fFe", "I1VIVMf8aW", "GW5uOjYbXD", "GP97xm546s", "EWAJXXkQxh", "E68NWJHG4l", "D7akZIZBht", "ClQ9hML42n", "7SDpXvTnfj", "5EQAAM5Sts", "0PtoU9fRjl" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732211827901, 1732208756993, 1732208813091, 1733155410965, 1732209481426, 1729879173977, 1730313461878, 1732209568208, 1732209599294, 1732209852061, 1732260816914, 1732553924587, 1732209812856, 1732209839000, 1732209953801, 1732209973703, 1732211790511, 1732209876790, 1732209014295, 1734726288521, 1732209468053, 1732208965982, 1732209545791, 1730402445826, 1732553832664, 1732209898925, 1732553676627, 1732209631859, 1732209788276, 1732209922078, 1732211768905, 1729398537902, 1732553817599, 1732211907227, 1733038082575, 
1737523644037, 1732208790126, 1732211864477, 1732209036695, 1732209520214, 1732553849745, 1732211849460 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_4CoR" ], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_x438" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_RnPx" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Area_Chair_V8QT" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_JwZx" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_RnPx" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" 
], [ "ICLR.cc/2025/Conference/Submission4492/Reviewer_RnPx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ], [ "ICLR.cc/2025/Conference/Submission4492/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> **W3:** The guarantee itself centers on sampling from \\\\psi_\\\\alpha which, as mentioned in the paper, becomes difficult exactly when the error bound becomes tight. Using Metropolis--Hastings essentially throws that guarantee away unless you can prove, e.g., uniform ergodicity of the resulting Markov chain. So in the end we are faced with what looks like just another difficult integral, just with a different form and a lack of a guarantee. Note that one could raise the same criticism of the original adversarial robustness result: sampling from little regions around the worst case inputs is just as difficult as finding them in the first place.\\n\\n**Response:**\\nNote that we only sample from the bounded, closed domain $\\\\bar{\\\\Omega} \\\\times \\\\Pi$. Hence, the target $\\\\psi_\\\\alpha$ has finite tails for any $\\\\alpha$, i.e., it has a moment generating function, satisfying sufficient conditions for uniform ergodicity (see, e.g., [Jarner & Hansen, \\\"Geometric ergodicity of Metropolis algorithms,\\\" 2000]). Additionally, we prove in Prop. 3.1-3.2 that the $\\\\psi_0$ in Alg. 1 (and more generally, $\\\\psi_\\\\alpha$) are square-integrable, i.e., belong to $\\\\mathcal{P}^2$. Thus, alternative modern sampling technique, such as Langevin Monte Carlo, could be used in Alg. 1 to improve its performance (albeit at a higher computational cost). This is an important point and we will include a note to this effect in Sec. 4.\\n\\nThe reviewer has a point that Alg. 
1 approximates the solution of (SCL). That is the case of every numerical algorithm, from optimization to traditional PDE solvers. Contrary to previous approaches, however, we show both that (SCL) is the *right problem* to solve, in that it provides a weak solution of the BVP (Prop. 3.2), and that Alg. 1 approximately solves it. Previous approaches do not solve (or even attempt to solve) (SCL), therefore failing to find a weak solution altogether.\"}", "{\"comment\": \"> **W1:** There are several different background techniques described in the paper, in addition to the new technique. Therefore, I think it would be beneficial to the readability of the paper to include a clear list of the scientific contributions of SCL to help the reader.\\n\\n**Response:**\\nWe thank the reviewer for their positive assessment of our work. The contributions of this paper are summarized in the last paragraph of the introduction (lines 54-65), but we list them here more explicitly:\\n\\n- prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 3.2);\\n- incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2);\\n- develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yields trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 
4);\\n- illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 5).\\n\\nWe point out that the result in Prop. 3.2 is key as it shows that it is not enough to use either worst-case losses as in (Wang et al., 2022a) or constrained formulations as in (Lu et al., 2021b; Basir & Senocak, 2022). Both are needed to find solutions of BVPs. Indeed, note that using a constrained formulation with fixed collocation points does not recover a solution for a simple PDE (Fig. 2), while Alg. 1 does (Table 1).\\n\\nWhat is more, Prop. 3.2 establishes the limitations of learning approaches that can only determine very smooth solutions for high-dimensional state spaces (essentially, solutions in the Sobolev space $W^{(d+1) / 4, 2}$). This can be an issue for large-scale dynamical systems, such as those found in smart grid applications, or when transforming higher-order PDEs in higher-dimensional first-order systems.\"}", "{\"comment\": \"> **W3:** One of the main claims for the new method is that it is not sensitive to the balancing of weights for the different components of the loss function. This is well justified by the use of the dual formulation. However this is not mentioned until the end of the 4th page. Due to the importance of this aspect of the solution, I would recommend mentioning it earlier in the paper.\\n\\n**Response:**\\nThe reviewer is correct that this is one of the main benefits of our constrained learning approach. Using duality is fundamentally different from existing loss balancing methods that rely on various heuristics (listed in our literature review in Appendix F). We hope that the addition of the explicit list of contributions above will make this clearer to the reader from the beginning. 
We will also mention this fact more explicitly in the abstract.\"}", "{\"comment\": [\"We thank the reviewer for revisiting their assessment and for the additional feedback. We address their latest comments below in the hope that they will feel more positive about the paper after these clarifications. We also thank them for spotting our typos: we will proof-read the full manuscript again after we have finished our camera-ready edits.\", \"**Error bars:** We completely agree. As we mentioned in our response, we continue to run experiments to include error bars on all of our results. We provided only preliminary results due to time and compute limitations. Though our contributions are indeed mainly theoretical, we believe that they may also have an impact in practice (as corroborated by our experiments).\", \"**(Lines 161-163) \\\"...they are the wrong problems in the first place.\\\":** We completely agree with the reviewer and we in no way believe (or claim) that previous approaches are \\\"wrong.\\\" As we state in the manuscript, (PI) and (PII) can and have successfully found solutions to many PDEs (see both our experiments in Sec. 5 and related work in App. G) and have been \\\"effective in many applications\\\" (line 147).\", \"Our statement is that they target the \\\"wrong problem\\\" because \\\"regardless of how the weights $\\\\mu$ in (PI) are adapted [...], it *need not* provide a solution of (SCL)\\\" (line 289, our emphasis). In other words, (PI) is an optimization problem that is *not guaranteed* to provide a (weak) solution of (BVP). In contrast, \\\"by (approximately) solving (PIII) [...], we indeed (approximately) solve (BVP)\\\" (line 204). Indeed, there are no impossibility results for (PI), but neither are there guarantees [in contrast to (PIII)].\", \"We will clarify those statements to ensure we do not suggest claims we do not prove.\", \"**Lines 146-170:** We address the known limitations of current approaches by proving (in the now Prop. 
3.1) that the optimization problem that solves (BVP) is (PIII). We would certainly not say this is a \\\"new paradigm,\\\" it is well within the general ML approach. What we argue is that overcoming the issues of previous methods requires revisiting which ML problem is being solved, i.e., that \\\"the challenges faced by previous NN-based BVP solvers arise not because of *how* (PI) and (PII) are solved,\\\" but because to solve (BVP) we actually need to solve (PIII).\", \"**(Lines 207-210):** We will expand on that remark noting that for $\\\\psi \\\\in W^{k^\\\\prime,2}(\\\\mathcal{D})$ (and under the smoothness restrictions stated in the theorem) the worst-case loss essentially recovers (1).\", \"**How to choose $\\\\alpha$?** In Sec. 4, we explain that we take $\\\\alpha=0$ in Alg. 1 because \\\"the $\\\\psi_\\\\alpha$ are smooth, fully-supported, square-integrable distributions for $\\\\alpha = 0$. They are therefore amenable to be sampled using [...] MCMC\\\" (in our case, the MH algorithm). We find this approximation (guaranteed by Prop. 4.1) to be good enough in our experiments, so that the use of \\\"algorithms adapted to discontinuous distributions (e.g., (Nishimura et al., 2020)) to enable [...] better approximations (increase $\\\\alpha$) is left for future work\\\" (lines 344-351).\", \"**(Lines 252-355)** The reviewer has a point. We are missing a reference to our own experiments (Sec. 5). As we explain in App. E, we replace $\\\\psi_\\\\alpha$ by the uniform distribution for the BC objective in all of our results.\", \"**(Lines 323-328):** We are glad that our response and revisions made the distinction between the empirical dual problem (and the primal-dual method in Alg. 1) and its distinction to previous approaches (PI) clearer. 
This is one of the fundamental points of our work, so we appreciate the reviewer's feedback in improving its presentation.\"]}", "{\"comment\": \"> **W1 (continued)**\\n\\nWith respect to the specific papers mentioned by the reviewer, they are (Daw et al., 2023) and (Wang et al., 2024) in our related work (Appendix F). Explicitly:\\n\\n- **(Daw et al., arXiv:2207.02338)** is a preliminary version of (Daw et al., 2023) that proposes *R3* which we use as a benchmark in our experiments (Table 1). R3 uses a sort of rejection sampling approach to approximate the worst-case loss for PDE residuals. The first important distinction is that, as we prove in Prop. 3.2, this is not enough to yield a weak solution of BVPs (see eikonal in Table 1). Doing so requires solving a *constrained* problem involving worst-case losses, namely (SCL). Alg. 1 does that by leveraging MCMC techniques (namely, Metropolis-Hastings) and duality. Aside from Metropolis-Hastings mixing faster than rejection sampling (Robert & Casella, 2004), our approach can accommodate other modern sampling techniques (e.g., Langevin Monte Carlo). Indeed, we prove in Prop. 3.1-3.2 that the $\\\\psi_0$ in Alg. 1 (and more generally, $\\\\psi_\\\\alpha$) are square-integrable, i.e., belong to $\\\\mathcal{P}^2$.\\n\\n In our comparisons to R3 (Sec. 5.1), we obtain very similar results in general, except when R3 fails to recover a solution (see eikonal in Table 1). Our method continues to provide good results in those cases, aside from being adapted to other settings, such as solving parametric BVPs (Sec. 5.2) and interpolating existing solutions (Sec. 5.4).\\n\\n- **(Wang et al., arXiv:2203.07404)** is a preliminary version of (Wang et al., 2024) which separately weights the PDE residuals within different time windows. The weights are adjusted to respect some form of temporal causality during training, an idea that has also been explored in, e.g., (Krishnapriyan et al., 2021). 
We note that we do not **need** to account for such a form of causality: the fact that (SCL')(M) tends to focus on fitting the solution at earlier times first (e.g., Fig. 1b) is a consequence of the training dynamics of Alg. 1. What is more, by sampling from $\\psi_0$ Alg. 1 adapts to the model fit in both time and space. This is particularly relevant for BVPs that do not have a clear notion of causality (e.g., Helmholtz equation in Fig. 10).\\n\\n- **\\\"A unified scalable framework for causal sweeping strategies for ...\\\"**: we assume the reviewer refers to\\n\\n [Penwarden et al., \\\"A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions,\\\" Journal of Computational Physics, 2023]\\n\\n If this is not the case, please let us know. Once again, the distinction is that we do not enforce any form of causality during training nor do we need to. The fact that (SCL)(M) first focuses on fitting the solution at earlier times (as in Fig. 1b) is a natural behavior of Alg. 1 and not one that we encourage in any way. As we mention above, this is particularly important for BVPs that lack a clear notion of causality (e.g., the Helmholtz equation) or when solving parametrized families of BVPs (it is not clear which parameter value we should fit first, as in Fig. 6). Alg. 1 adapts to the underlying needs of the problem without any modification.\\n\\nWe appreciate the reviewer bringing (Penwarden et al., 2023) to our attention, especially since the architectures based on domain decomposition it reviews can be used as $u_\\\\theta$ in (SCL) to tackle problems with discontinuous solutions or improve computational complexity. 
We will include it and expand our remarks on the other works in our literature review (Appendix F).\"}", "{\"summary\": \"Neural network-based approaches, such as physics-informed neural networks and neural operators, offer a mesh-free alternative by directly fitting those models to the PDE solution. They are known to be highly sensitive to hyperparameters such as collocation points and the weights associated with each loss. This paper addresses these challenges by developing a science-constrained learning (SCL) framework. It demonstrates that finding a (weak) solution of a PDE is equivalent to solving a constrained learning problem\\nwith worst-case losses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The strength of the paper is their proposed solution to the hyperparameter issue: They develop a science-constrained learning (SCL) framework. It demonstrates that finding a (weak) solution of a PDE is equivalent to solving a constrained learning problem\\nwith worst-case losses.\\n\\nThe paper provides ways of incorporating structural knowledge, observational knowledge, and science-contrained learning. The algorithm is easy to follow, and the mathematical proofs are helpful in terms of understanding the benefits of the method.\", \"weaknesses\": \"The reviewer is having a difficult time distinguishing the contribution from the work of Paris Perdikaris and his group. Examples are:\\n\\nA. Daw, J. Bu, S. Wang, P. Perdikaris, A. Karpatne, Rethinking the Importance of Sampling in Physics-informed Neural\\nNetworks, arXiv preprint arXiv:2207.02338\\n\\n S. Wang, S. Sankaran, P. Perdikaris, Respecting causality is all you need for training physics-informed neural networks,\", \"arxiv_preprint_arxiv\": \"2203.07404\\n\\nParis references in one of his talks a paper by this former advisor and colleagues \\\"A unified scalable framework for causal sweeping strategies for ...\\\" that has a categorization (and references). 
\\n\\nThe important point is that the algorithm in the current paper (Algorithm 1) has many of the same characteristics you find in Paris' work, which incorporates learning the hyperparameters into the learning.\\n\\nAt some point, the reviewer is wondering how common this is now considered when application papers now reference the constrained paper (NIPS) and say that they use PINNs + Constrained learning:\", \"https\": \"//www.sciencedirect.com/science/article/pii/S1385894724012919#b78\\n\\nIs the current paper really 'novel' (i.e. a contribution) if others now consider things commonplace. The reviewer is open to being convinced of the contributions if the literature search were broader and the comparisons were against the various adaptive sampling methods mentioned by Paris and others.\\n\\nIn terms of concrete actions, can the authors:\\n+ Provide a more comprehensive literature review that clearly positions their work relative to existing constrained learning approaches for PINNs?\\n\\n+ Explicitly discuss how their method differs from or improves upon other adaptive sampling techniques, particularly those by Perdikaris and others (as highlighted by the question below)?\\n\\n+ Highlight any novel theoretical results or empirical improvements over state-of-the-art methods\", \"questions\": \"The specific questions (associated with the items above) are:\\n\\n+ Can the authors clarify how their approach (for instance Algorithm 1 updating schedule) compares to the update schedule in Paris' paper: https://epubs.siam.org/doi/pdf/10.1137/20M1318043 Section 2.4 (not the specific updates, but the general updates as given by the spirit of the logic presented)? In others of Paris' papers, he points to : https://arxiv.org/pdf/2007.04542. 
How does the new method compare to the \\\"adding weights to the loss function\\\" section 2.2.1 (Which is analogous to Paris' work also)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics issues\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper develops a technique called ``Science constrained learning'' in order to address some of the issues with Physics Informed Neural Networks (PINNs) by allowing structural constraints and known solutions to be taken into account in addition to the usual constraints considered by PINNs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Overall the paper is well written and formatted and the development of Science Constrained Learning (SCL) is well described. There are several different background techniques described in the paper, in addition to the new technique. I think this work is worthy of inclusion in the conference and I am leaning towards an acceptance.\", \"weaknesses\": \"There are several different background techniques described in the paper, in addition to the new technique. Therefore, I think it would be beneficial to the readability of the paper to include a clear list of the scientific contributions of SCL to help the reader. I note that the paper is a full 10 pages long, however I think that some of the background could be shortened somewhat (especially in the introductory sections) in order to give the description of the new technique a bit more room to breathe.\\n\\nOne of the main claims for the new method is that it is not sensitive to the balancing of weights for the different components of the loss function. This is well justified by the use of the dual formulation. However this is not mentioned until the end of the 4th page. Due to the importance of this aspect of the solution, I would recommend mentioning it earlier in the paper. 
Otherwise the reader is left wondering what the difference is to e.g.\\\\ the work in [1], which uses derivative constraints and empirical observations. \\n\\nThe use of the Lagrangian does indeed support the idea that the method addresses the issue of balancing the weights in the loss function which can affect PINNs. However the paper also makes the statement that this new methodology addresses sensitivity to the number of training points. I cannot really see a clear explanation in the paper for why this might be the case. Either this explanation should be brought out more and clarified in the paper or, alternatively, it should be explained that this is an empirical observation.\\n\\n[1] Kentaro Hoshisashi, Carolyn E Phelan, and Paolo Barucca. No-arbitrage deep calibration for volatility smile and skewness. arXiv preprint arXiv:2310.16703, 2023.\", \"questions\": \"I would be very pleased to hear the authors' response to my comments above, especially with respect to the proof of the reduction in conditioning number in Appendix B.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **W5:** Explicitly discuss how their method differs from or improves upon other adaptive sampling techniques, particularly those by Perdikaris and others (as highlighted by the question below)?\\n\\n**Response:**\\nAs we noted before, Alg. 1 approximates worst-case losses (such as the PDE residuals) by using MCMC techniques (namely, Metropolis-Hastings). This is justified by Prop. 3.1 which proves that sampling from $\\\\psi_0$ is an explicit approximation of the worst-case loss. Such an approximation guarantee does not exist for any of the adaptive sampling techniques mentioned by the reviewer or in our literature review (see Appendix F). Prop. 3.2, however, proves that adaptive sampling techniques are not enough to obtain weak solutions of (BVP). 
Indeed, they must be combined with a constrained formulation as in (SCL). The adaptive sampling papers mentioned by the reviewer lack such guarantees as they focus on weighted loss formulations of the form (PI).\\n\\nObserve that we directly compare to R3 from (Daw et al., 2023), where it was shown to outperform many of the adaptive sampling techniques mentioned by the reviewer and in the paper (Sec. 5.1). As we noted in W1, our results are generally similar to R3, except when it fails (see Eikonal equation in Table 1). Our method provides good results even in those cases, aside from being adapted to other settings, such as solving parametric BVPs (Sec. 5.2) and interpolating existing solutions (Sec. 5.4).\"}", "{\"comment\": \"> **W6:** Highlight any novel theoretical results or empirical improvements over state-of-the-art methods\\n\\n**Response:**\\nWe repeat here our list of contributions (as summarized in lines 54-65):\\n\\n- prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 3.2);\\n- incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2);\\n- develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yields trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 4);\\n- illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 
5).\\n\\nAs we have elaborated in our responses so far, none of the previous works solve a constrained learning problem with worst-case losses, namely (PIII). As we prove in Prop. 3.2, this is required to obtain weak solutions of BVPs. Hence, in contrast to other approaches, Alg. 1 does (approximately) solve (BVP) (see response to W2). What is more, Alg. 1 can seamlessly tackle both unsupervised (as in most papers mentioned by the reviewer) and supervised problems (Sec. 5.4).\\n\\nOur empirical results show that we either match or outperform traditional PINN baselines and R3 (Daw et al., 2023). Even in cases where they fail (eikonal equation in Table 1), our method finds good solutions (without modifications, see Appendix D). Alg. 1 can also simultaneously solve entire families of PDEs (Sec. 5.2) and incorporate both structural knowledge (Sec. 5.3) and data (Sec. 5.4). In the latter case, it also provides measures of trustworthiness for the solution by pointing out data points that are hard to fit (by means of the dual variables $\\lambda$, e.g., Fig. 3).\"}", "{\"comment\": \"> **W4:** (Minor) (Lines 1730, 1733, etc.) Fig -> Fig.\\n\\n**Response:**\\nWe have fixed it and proofread the manuscript for other typos.\"}", "{\"comment\": \"Thank you for the clarification and discussion, which corrected some of my misunderstandings. I have also gone through other reviews and author's responses, and I found that the paper would benefit from a major revision in my opinion.\\nCould you submit a revised paper? I would like to read it before sharing additional comments and replies; I am willing to change my score after reading it and rebuttal.\\n\\nThank you.\"}", "{\"comment\": \"Following the reviewer's suggestion, we have updated our manuscript (see main comment for a list of modifications, currently marked in blue in the paper). We hope they find that these changes make the paper clearer and address their concerns directly in the manuscript. 
But we are happy to provide further details and clarifications if necessary.\"}", "{\"comment\": [\"> **W2:** The dependence on random seeds is unknown, potentially causing reproducibility issues. Error bars should be added.\", \"**Response:**\", \"The reviewer has a point. We are running additional experiments with different seeds for the camera-ready. In the meantime, we report here preliminary results to reassure them that our results are not cherry-picked. We ran ten experiments with different seeds for the convection equation both in the case of solving a specific PDE (Sec. 5.1, $\\\\beta=30$ as in Table 1) and solving a parametric family of BVPs (Sec. 5.2, $\\\\beta \\\\in [1, 30]$ compared to the finest discretization in Table 7 and Fig. 1a).\", \"Solving a specific PINN: mean (standard deviation)\", \"**PINN:** 1.19 (0.56) %\", \"**R3:** 1.00 (0.09) %\", \"**(SCL)(M):** 1.02 (0.39) %\", \"Solving parametric families of BVPs: mean (standard deviation)\", \"**(PI):** 4.03 (3.39) %\", \"**(SCL')(M):** 1.11 (0.22) %\"]}", "{\"comment\": \"> **W3:** The paper is difficult to follow. The paper benefits from paragraph writing, and please present information in relevant paragraphs and sections. It was challenging to discern whether certain sentences conveyed key ideas or were merely supplementary notes.\\n\\n**Response:**\\nWe will copy-edit the paper for clarity (particularly as we shorten the introductory sections to make space for additional material suggested by Reviewer x438). However, we do not see the need to make major changes to the presentation (aside from the list of contributions above) since other reviewers do not appear to have issues with it (our presentation scores range from 3 to 4). 
We understand, however, that clarity is in the eye of the beholder and would be happy to modify and add clarifications to specific parts that the reviewer found particularly difficult to follow.\"}", "{\"comment\": \"> **Q4:** (Lines 472-) \\\"it (causality) arises naturally by solving (SCL')(M). As training advances, however, Alg. 1 shifts focus to fitting higher convection speeds $\\\\beta$. Note that this occurs without any prior knowledge of the problems or manual tuning.\\\": Could you clarify why causality naturally arises in this context?\\n\\n**Response:**\\nWe repeat here the full quote from the manuscript for context:\\n\\n>> by inspecting $\\\\psi_0^\\\\text{PDE}$ at different stages of training (Fig. 1b) it becomes clear that SCL begins by fitting the solution 'causally,' focusing first on smaller values of $t$. While doing so has been proposed to improve training (Krishnapriyan et al., 2021; Wang et al., 2024), it arises naturally by solving (SCL')(M). As training advances, however, Alg. 1 shifts focus to fitting higher convection speeds $\\\\beta$. Note that this occurs without any prior knowledge of the problems or manual tuning.\\n\\nIn the first part, we refer to \\\"causality\\\" in terms of \\\"focusing first on smaller values of $t$.\\\" This is an approach that has been put forward in prior works to address PINN failures, such as (Krishnapriyan et al., 2021; Wang et al., 2024). We do not encourage this \\\"causal\\\" fitting of the solution in any way. However, we observe that, for certain BVPs (e.g., the convection PDE in Fig. 1), Alg. 1 tends to focus on earlier time instants in the beginning of training (i.e., $\\\\psi_0$ puts more mass on smaller values of $t$, Fig. 1b). This is not enforced by the algorithm, but happens \\\"naturally.\\\" This is an important fact given that there are cases where this notion of causality is not clear (e.g., the Helmholtz equation in Fig. 10).\\n\\nIn fact, Alg. 
1 adapts to the underlying needs of the problem without any modification. Indeed, note that $\\\\psi_0$ is actually a joint distribution of collocation points and parameters $(x,t,\\\\beta)$. Hence, while in later stages of training Alg. 1 does not distinguish between values of $t$ (it has a uniform marginal), it does tend to focus more on higher convection speeds (larger $\\\\beta$). Once again, this is not a behavior we encourage based on prior knowledge, but something that arises \\\"naturally\\\" from the dynamic of Alg. 1.\\n\\nWe hope that this makes our assertion clearer. We see now that these statements could be confusing as they are written and will reformulate them to better separate the \\\"causality\\\" arising from sampling $t$ from the changes in $\\\\beta$.\"}", "{\"comment\": \"> **Q5:** Under what conditions, does Algorithm 1 converge? (Chamon et al., 2023; Elenter et al., 2024) could be valuable references, to my understanding.\\n\\n**Response:**\\nConvergence of primal-dual methods such as Alg. 1 in non-convex settings is the subject of active research, see, e.g.,\\n\\n- Yang et al., \\\"Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems,\\\" 2020\\n- Lin et al., \\\"Near-optimal algorithms for minimax optimization,\\\" 2020\\n- Fiez et al., \\\"Global convergence to local min-max equilibrium in classes of nonconvex zero-sum games,\\\" 2021\\n- Boroun et al., \\\"Accelerated primal-dual scheme for a class of stochastic nonconvex-concave saddle point problems,\\\" 2023\\n\\nAs we explain in the paper (line 377), Alg. 1 is actually a variant of the dual ascent method shown in (3). For the latter method, (Elenter et al., 2024, Prop. 4.1) provides a convergence guarantee as long as the losses are strongly convex and smooth (satisfied by our quadratic losses) and the model capacity is large enough (i.e., we use large neural networks). 
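For intuition, the alternating primal-descent/dual-ascent structure referenced above can be sketched on a toy scalar problem (a minimal illustration of generic dual ascent, not Alg. 1 itself or code from the paper; the objective, constraint, step sizes, and iteration count are all our assumptions):

```python
import numpy as np

def dual_ascent(f_grad, g_list, g_grads, theta, steps=2000, eta_p=1e-2, eta_d=1e-2):
    """Toy primal-dual loop for min_theta f(theta) s.t. g_i(theta) <= 0.

    Primal step: gradient descent on the Lagrangian f + sum_i lam_i * g_i.
    Dual step: projected gradient ascent, lam_i <- max(0, lam_i + eta_d * g_i).
    """
    lam = np.zeros(len(g_list))
    for _ in range(steps):
        lagr_grad = f_grad(theta) + sum(l * gg(theta) for l, gg in zip(lam, g_grads))
        theta = theta - eta_p * lagr_grad
        lam = np.maximum(0.0, lam + eta_d * np.array([g(theta) for g in g_list]))
    return theta, lam

# Toy problem: minimize theta^2 subject to theta >= 1, i.e., g(theta) = 1 - theta <= 0.
theta, lam = dual_ascent(
    f_grad=lambda t: 2.0 * t,
    g_list=[lambda t: 1.0 - t],
    g_grads=[lambda t: -1.0],
    theta=np.array(0.0),
)
# The iterates approach the KKT point theta* = 1, lambda* = 2.
```

The dual variables weight each constraint automatically, which is the sense in which no fixed loss weights need to be tuned; in Alg. 1, the constraint values are additionally estimated by sampling collocation points rather than evaluated in closed form.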
Note that despite the convexity of the losses, the resulting problem is not convex due to the non-linearity of the models.\n\nIn this setting, (Elenter et al., 2024, Prop. 4.1) guarantees much more than simply near-optimality of the solution [as is the case of, e.g., (Chamon et al., 2023, Thm. 2)], but also near-feasibility, side-stepping primal recovery issues found even in convex optimization [see, e.g., (Nedic & Ozdaglar, \\\"Approximate primal solutions and rate analysis for dual subgradient methods,\\\" 2009)]. Transferring these results to Alg. 1 requires an additional step size separation condition [see (Yang et al., 2020) mentioned above for specific results]. Though studying the convergence properties of Alg. 1 is beyond the scope of this paper, we will include a remark with the above discussion in the revised manuscript.\"}", "{\"comment\": \"> **W2:** So we might instead take the contribution of the paper to be the guarantee, except: As stated in the paper, existing methods already have guarantees.\\n\\n**Response:**\\nTo the best of our knowledge, no prior methods guarantee the solution of BVPs (even approximately). We also could not find such a claim in our manuscript. Explicitly, the contributions of our paper are (as summarized in lines 54-65):\\n\\n- prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 3.2);\\n- incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2);\\n- develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yields trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 
4);\\n- illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 5).\\n\\nAlg. 1 is different from prior work as it arises directly from Prop. 3.2. Prop. 3.2 proves that weak solutions of BVPs are obtained by **solving (1) constrained learning problems with (2) worst-case losses** as in (SCL). Hence, it is not enough to use only constrained or Lagrangian formulations as in (Lu et al., 2021b; Basir & Senocak, 2022) or other adaptive loss weighting schemes as in (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023). Indeed, note that using a constrained formulation with fixed collocation points does not recover a solution of the convection PDE (Fig. 2). It is also not enough to use only worst-case loss approximations as in (Wang et al., 2022a; Daw et al., 2023). Indeed, R3 from (Daw et al., 2023) fails to recover a solution for the Eikonal equation (Table 1). In contrast, Alg. 1 succeeds in both cases (see Table 1).\\n\\nProp. 3.2 shows that the limitations of previous approaches are not methodological, but epistemological. It is not an issue of *how* they solve the problem, but *which* problem they solve. In contrast, by (approximately) solving (SCL), Alg. 1 (approximately) solves (BVP). What is more, Alg. 1 also applies to other settings, such as solving parametric BVPs (Sec. 5.2) and interpolating existing solutions (Sec. 5.4).\\n\\nProp. 3.2 also establishes the limitations of learning approaches that can only find very smooth solutions for high-dimensional state spaces (essentially, solution in the Sobolev space $W^{(d+1)/4,2}$). 
This can be an issue for large-scale dynamical systems, such as those found in smart grid applications, or when transforming higher-order PDEs into higher-dimensional first-order systems.\"}", "{\"comment\": \"> **Q1:** The proposed formulation seems to be a rephrasing of PDE boundary value problems with invariance, observation data into a single constraint problem. The resulting Lagrangian problem is then solved straightforwardly in Algorithm 1, reminiscent of standard multi-task learning algorithms. Is there more to it than this?\\n\\n**Response:**\\nWhile the transformations mentioned by the reviewer may appear straightforward, they are not. In fact, none of the derivations needed to obtain Alg. 1 from (BVP) hold in general.\\n\\n- **From (BVP) to (SCL)**: In Prop. 3.2, we prove that a weak solution of (BVP) can be obtained by **solving (1) a constrained learning problem with (2) worst-case losses**. Hence, as we detail in W1, solving a constrained problem is not enough to solve (BVP). In fact, using a constrained formulation with fixed collocation points fails to find a solution to the convection PDE (Fig. 2), while Alg. 1 succeeds (Table 1). This relation is not trivial and establishes limitations of learning approaches (see response to W1). By then applying Prop. 3.1, we are able to transform the worst-case losses into statistical losses (against the specially crafted distributions $\\\\psi_\\\\alpha$), leading to (SCL).\\n\\n- **From (SCL) to the empirical dual problem (3)**: As we explain in Sec. 3.1 and W1, the relation between (SCL) and its Lagrangian dual of the form ($\\\\hat{\\\\text{D}}$-CSL) is not straightforward due to non-convexity. This is an issue because primal-dual and dual ascent methods such as Alg. 1 and other Lagrangian-based approaches (Lu et al., 2021b; Basir & Senocak, 2022) solve dual problems and not (SCL). 
We overcome this issue by using statistical losses and non-convex duality results from (Cotter et al., 2019; Chamon & Ribeiro, 2020; Chamon et al., 2023; Elenter et al., 2024), obtaining convergence guarantees for (3), a variant of Alg. 1 (see Q5 for more details on convergence).\\n\\nIt is important to contrast Alg. 1 with other primal-dual methods from multi-task learning and previous works (Lu et al., 2021b; Wang et al., 2021a; Basir & Senocak, 2022; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023). Indeed, Prop. 3.2 shows that the limitations of previous approaches are not methodological, but epistemological. It is not an issue of *how* they solve the problem, but *which* problem they solve. Alg. 1, however, (approximately) solves (SCL), a constrained problem with worst-case losses, and therefore (approximately) solves (BVP). As we illustrated in W1, this issue is not merely theoretical and can cause failures in practice (see Table 1 and Fig. 2).\"}", "{\"comment\": \"> **W5:** The use of the Lagrangian does indeed support the idea that the method addresses the issue of balancing the weights in the loss function which can affect PINNs. However the paper also makes the statement that this new methodology addresses sensitivity to the number of training points. I cannot really see a clear explanation in the paper for why this might be the case. 
Either this explanation should be brought out more and clarified in the paper or, alternatively, it should be explained that this is an empirical observation.\\n\\n**Response:**\\nWe assume that by \\\"sensitivity to the number of training points\\\" the reviewer refers to our claim that our method is not as sensitive as PINNs to the choice of collocation points (please do not hesitate to correct us if that is not the case).\\n\\nThat the choice of collocation points can greatly impact the quality of the solution of PINNs has been observed and investigated before (Nabian et al., 2021; Daw et al., 2023; Wang et al., 2024). We illustrate this in Fig. 2, where even constrained methods are unable to solve a convection BVP when using *fixed* collocation points. This hyperparameter is completely removed in (SCL), since evaluation points are chosen in Alg. 1 according to distributions $\\\\psi_0$ that are adapted both to the PDE and the model $u_{\\\\theta}$. However, this is not just a convenience: Prop. 3.2 clearly shows that only by combining constrained formulations with worst-case losses as in (SCL) can we solve (BVP).\\n\\nFig. 2 also illustrates how (SCL) can be made less sensitive to collocation points by incorporating scientific knowledge. Indeed, when using a small number of *fixed collocation points* neither (PI) (which uses hand-tuned weights to combine the losses) nor (SCL)(M) (which uses a constrained formulation) are able to find a solution of the BVP. Yet, when incorporating structural information (in this case, invariance), an accurate solution is obtained by (SCL)(M+S).\\n\\nThis insensitivity also appears when solving for parametric families of PDEs. Note from Fig. 1 that the finer the discretization of the parameter range, the more collocation points are used by (PI) (as the legend suggests, (PI) uses 1000 collocation points per parameter). Clearly, automatically selecting collocation points can bring performance advantages (see, e.g., Table 7). 
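For concreteness, residual-based selection of collocation points (in the spirit of adaptive sampling baselines such as R3, not the method of the paper — Alg. 1 instead samples from the loss-adapted distributions $\\psi_0$) can be sketched as follows; the toy residual, domain, and pool sizes below are our own assumptions:

```python
import numpy as np

def resample_collocation(residual, n_points, pool_factor=10, rng=None):
    """Draw a fresh candidate pool and keep the points with the largest residual.

    With a fixed grid, regions where the current model is worst may contain no
    collocation points at all; residual-based selection keeps training pressure
    on those regions as the model changes.
    """
    rng = np.random.default_rng(rng)
    pool = rng.uniform(0.0, 1.0, size=pool_factor * n_points)
    keep = np.argsort(residual(pool))[-n_points:]  # indices of largest residuals
    return pool[keep]

# Toy residual peaked near x = 0.3: selected points cluster around it.
residual = lambda x: np.exp(-((x - 0.3) ** 2) / 0.002)
pts = resample_collocation(residual, n_points=50, rng=0)
```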
Once again, this is not only convenient, but also necessary to provide (weak) solutions of BVPs (Prop. 3.2).\n\nWe have included this point in the contribution list of the introduction and will more explicitly point to evidence that supports it (both methodological, i.e., Alg. 1, and empirical, such as Figs. 1 and 2) wherever it appears.\"}", "{\"metareview\": \"This work shows that obtaining weak solutions to PDEs can be formulated as a constrained learning problem with worst case losses. Furthermore, it develops an efficient algorithm for solving this problem and incorporating further structural information that may be known. The method is evaluated on a variety of tasks and the comparisons are extensive.\", \"additional_comments_on_reviewer_discussion\": \"While I also share one reviewer's concern that many works which solve PDEs with ML do not give a fair comparison with traditional methods, I believe that the authors have managed to do this well here (in rebuttal) and their overall numerics are extensive. For some problems, it is well known that traditional methods are very slow and it need not be that every new paper in this field reiterates this point. The method proposed by the authors is, as far as I know, novel, and their results show significant improvement over other PINN-based methodologies. While I do think that general claims about beating traditional solvers should be toned down, I think the added numerics make a significant case for the publication of this work. As the first work to introduce the methodology, it cannot be expected that it will beat all traditional methods on all problems. The addition of cost/accuracy trade-off curves is the right direction for all works in this field.\"}", "{\"comment\": \"> **W1:** The reviewer is having a difficult time distinguishing the contribution from the work of Paris Perdikaris and his group. Examples are:\\n> \\n> A. Daw, J. Bu, S. Wang, P. Perdikaris, A. 
Karpatne, Rethinking the Importance of Sampling in Physics-informed Neural Networks, arXiv preprint arXiv:2207.02338\\n> S. Wang, S. Sankaran, P. Perdikaris, Respecting causality is all you need for training physics-informed neural networks, arXiv preprint arXiv:2203.07404\\n> \\n> Paris references in one of his talks a paper by this former advisor and colleagues \\\"A unified scalable framework for causal sweeping strategies for ...\\\" that has a categorization (and references).\\n\\n\\n**Response:**\\nThe main contributions of this work (summarized in lines 54-65) are\\n\\n- prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 3.2);\\n- incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2);\\n- develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yield trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 4);\\n- illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 5).\\n\\nAs suggested by x438, we will include this explicit list in the introduction.\\n\\nProp. 3.2 in particular is key as it shows that the limitations of previous approaches are not methodological, but epistemological. It is not an issue of *how* they solve the problem, but *which* problem they solve. Indeed, Prop. 
3.2 shows that it is not enough to use either worst-case losses as in (Wang et al., 2022a; Daw et al., 2023) or adaptive loss weighting as in (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023), even using constrained formulations (Lu et al., 2021b; Basir & Senocak, 2022).\n\nThese are not purely theoretical issues: using (PI) with R3 to approximate worst-case losses fails to recover a solution for the Eikonal equation (Table 1) and using a constrained formulation with fixed collocation points fails to recover a solution for the convection PDE (Fig. 2). Both are needed simultaneously, which is why Alg. 1 succeeds in both cases (see Table 1).\\n\\n(continued in the next comment)\"}", "{\"comment\": \"> **W4:** Otherwise the reader is left wondering what the difference is to e.g. the work in [1], which uses derivative constraints and empirical observations.\\n\\n**Response:**\\nThe problem that Alg. 1 tackles is substantially different from that in [1]. First, [1] controls the average value of derivatives on an empirical dataset (see [1, eq. 9]). In contrast, (BVP) [and (SCL)] require that the differential equation hold for all points in the domain. Second, [1] uses fixed weights to incorporate the derivative constraints as penalties in the loss. In that sense, it is similar to (PI) and (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023). This, however, is not enough to solve (SCL) due to its non-convexity (see duality discussion in, e.g., response W2 to Reviewer 4CoR).\\n\\nWe address the first point by relying on Prop. 3.1, which turns a worst-case loss into a statistical loss with respect to a specially crafted distribution ($\\\\psi_\\\\alpha$). We can then write the constrained *learning* problem (SCL) using statistical rather than deterministic losses and collocation points. 
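To make the first point concrete, the statistical reformulation can be illustrated by sampling collocation points from a density proportional to a pointwise loss via Metropolis-Hastings (a minimal self-contained sketch of our own, not the paper's implementation; the toy loss on $[0,1]$, proposal scale, and seed are assumptions):

```python
import math
import random

def mh_sample(loss, n_samples, n_burn=500, step=0.1, seed=0):
    """Metropolis-Hastings chain targeting p(x) proportional to loss(x) on [0, 1].

    Random-walk proposals with a symmetric Gaussian kernel; proposals outside
    the domain are rejected, i.e., treated as having zero target density.
    """
    rng = random.Random(seed)
    x = rng.random()
    samples = []
    for i in range(n_burn + n_samples):
        prop = x + rng.gauss(0.0, step)
        if 0.0 <= prop <= 1.0:
            # Symmetric proposal => accept with prob min(1, loss(prop)/loss(x)).
            if rng.random() < min(1.0, loss(prop) / max(loss(x), 1e-12)):
                x = prop
        if i >= n_burn:
            samples.append(x)
    return samples

# Toy pointwise loss concentrated near x = 0.8: samples cluster there, so an
# average of the loss over these samples emphasizes the worst-case region.
loss = lambda x: math.exp(-((x - 0.8) ** 2) / (2 * 0.05 ** 2)) + 1e-3
pts = mh_sample(loss, 2000)
```

Because the samples concentrate where the loss is large, averaging the loss over them approximates the worst-case objective discussed above.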
This allows us to handle the second point by applying the non-convex duality results from (Chamon & Ribeiro, 2020; Chamon et al., 2023) to provide approximation guarantees for the solution of its dual problem, of the form ($\\\\hat{\\\\text{D}}$-CSL). Combined with results from (Cotter et al., 2019; Elenter et al., 2024), this yields convergence guarantees for (3), a variant of Alg. 1 (as we explain in line 377) towards a (probably approximately) near-feasible and near-optimal solution of (SCL). By Prop. 3.2, this provides a (weak) solution of (BVP).\\n\\nWe hope that this parallel with [1] clarifies our differences in terms of problem and methodologies. We agree with the reviewer that these clarifications should appear earlier in the paper and expect that modifying the contributions and abstract (as well as reducing the introductory sections) will help with this goal.\"}", "{\"comment\": \"> **W4:** Provide a more comprehensive literature review that clearly positions their work relative to existing constrained learning approaches for PINNs?\\n\\n**Response:**\\nWe refer the reviewer to our related work section (Appendix F), where we review the literature in general. More specifically, we compare there and throughout our paper to (Lu et al., 2021b; Basir & Senocak, 2022) which are the two references we found that explicitly use constrained formulations. However, as we have argued in our response to W3, they do not show that their Lagrangian-based algorithm actually solves the constrained problem of interest given that it is non-convex. Hence, (Lu et al., 2021b; Basir & Senocak, 2022) provide solutions to dual problems of the form ($\\\\hat{\\\\text{D}}$-CSL) in Sec. 3.1 and not to their constrained counterparts. We overcome this challenge by using the *constrained learning* formulation (SCL), i.e., a statistical rather than deterministic optimization problem (see more details in the response to W3).\\n\\nNote also that Alg. 
1 is different from all the papers mentioned so far in that it *also* uses statistical formulations of worst-case losses (namely, the $\\psi_0$). This is not a small distinction as it is *necessary* to obtain (weak) solutions of BVPs (as we prove in Prop. 3.2). In fact, Prop. 3.2 shows that neither the constrained formulations of (Lu et al., 2021b; Basir & Senocak, 2022) nor the adaptive loss weighting schemes of (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023) are sufficient to solve (BVP). We refer the reviewer to W3 for a detailed discussion and failure examples.\\n\\nWe will update Appendix F to include these more detailed discussions as well as the two additional references mentioned by the reviewer, namely (Penwarden et al., 2023) and (Rehman & Lienhard, 2024). Do not hesitate to let us know if we left out any additional important work.\"}", "{\"summary\": \"This paper aims to adapt ideas from adversarial robustness and use them to find neural network solutions to partial differential equations with improved guarantees over existing neural solutions to PDEs. The scheme, as I understand it, is to observe that the weak form of the solution to a PDE can be achieved by finding the minimum of a kind of \\\"worst-case PINN loss\\\" in which the distribution over collocation points is chosen to maximize the expected PINN loss. This framing allows a particular result from adversarial robustness to apply: the worst-case (wrt distributions on inputs) expected loss on a region can be approximated by an expectation under a distribution proportional to the shifted-and-truncated loss on that region. The amount of shifting governs the accuracy of the approximation. 
I understand this to be a kind of continuity argument: if the worst-case distribution is concentrated on the points with the highest loss, then smoothing those out a bit doesn't change the value all that much.\\n\\nThe result is that you can get a reframing of the weak problem to look sort of like a PINN with a particular distribution on the collocation points. The paper discusses how various kinds of constraints, e.g., symmetries, can then be framed in this way. The actual sampling itself of the \\\\psi_\\\\alpha distribution is done with Metropolis-Hastings, or it is discarded in favor of a uniform distribution. The approach is empirically evaluated on a couple of small problems and compared to some neural network baselines. No comparisons to conventional PDE solvers are performed.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"I think the main strength of this paper is that it allows one to reason more formally about the PINN-type setup and connect it directly to the weak form solution. It's a creative idea to adapt the result from adversarial robustness. The paper is mostly well written.\", \"weaknesses\": \"The main weakness of the paper is that we don't get any sense of how well this actually works for solving PDEs. Like many \\\"deep learning for PDEs\\\" papers, it does not compare to conventional methods for solving PDEs, but only other neural networks. It is currently unclear whether PINNs and related ideas are actually ever a good idea. Very few papers examine the actual Pareto curve of computational cost versus accuracy with respect to, e.g., FEM. A recent paper reviewing the area has shed light on just how bad this situation is from a scientific perspective:\\n\\nMcGreivy, Nick, and Ammar Hakim. 
\\\"Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations.\\\" Nature Machine Intelligence (2024): 1-14.\\n\\nThe MH24 paper criticizes weak standard numerical baselines, but the present paper does not compare to any conventional numerical methods at all. Do we conclude from this paper that I should stop using my PDE solver and start using this method? The abstract claims \\\"accurate solutions\\\" but then only demonstrates that relative to deep learning baselines on tiny problems.\\n\\nAnd to be clear, this situation is not really the authors' fault. They are just following the trend and applying the standard that the neural PDE solver community has created. Nevertheless, the for the larger scientific community to find this work valuable, we must apply a standard that makes sense for people who want to actually solve PDEs.\\n\\nSo we might instead take the contribution of the paper to be the guarantee, except:\\n\\n1) As stated in the paper, existing methods already have guarantees.\\n\\n2) The guarantee itself centers on sampling from \\\\psi_\\\\alpha which, as mentioned in the paper, becomes difficult exactly when the error bound becomes tight. Using Metropolis--Hastings essentially throws that guarantee away unless you can prove, e.g., uniform ergodicity of the resulting Markov chain. So in the end we are faced with what looks like just another difficult integral, just with a different form and a lack of a guarantee. Note that one could raise the same criticism of the original adversarial robustness result: sampling from little regions around the worst case inputs is just as difficult as finding them in the first place.\", \"questions\": \"What did you use to compute the ground truth solutions? How long did it take to compute those ground truth solutions relative to the methods compared in the paper? 
What I'd like to see here is a thoughtful comparison with conventional methods and points along the Pareto curve trading off compute time against accuracy (here, that means the fineness of discretization for, e.g., FEM).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Following the reviewers' comments, we have updated our manuscript (see main comment for a list of modifications, currently marked in blue in the paper). We hope the reviewer finds that these minor additions and changes make the paper clearer and address their concerns directly in the manuscript. We are happy to provide further details and clarifications if necessary.\"}", "{\"comment\": \"> **Q2:** (Lines 214-) \\\"Compared to penalty methods, (\\\\^{D} -CSL) does not require extensive tuning of coefficients [such as $\\\\mu$ in (PI)] and guarantees generalization individually for each constraint (near-optimality and near-feasibility) rather than for their aggregated value.\\\": Could you elaborate on this? It seems to be a key statement regarding the paper's contribution. Could you provide some references to support this?\\n\\n**Response:**\\nThis statement is based on the references cited immediately after on line 217, namely (Chamon & Ribeiro, 2020; Chamon et al., 2023). When solving problems such as (PI), i.e., where losses are aggregated into a single objective with fixed hyperparameters (weights) $\\\\mu$, classical learning theory only provides generalization guarantees about the value of that aggregate loss. In other words, we can show that\\n$$\\n\\\\mu_D \\\\ell_D(\\\\theta^\\\\star) + \\\\mu_{BC} \\\\ell_{BC}(\\\\theta^\\\\star)\\n \\\\approx \\\\mathbb{E} \\\\big[ \\\\mu_D \\\\ell_D(\\\\theta^\\\\star) + \\\\mu_{BC} \\\\ell_{BC}(\\\\theta^\\\\star) \\\\big],\\n$$\\nwhere the empirical losses $\\\\ell$ are defined in (PI). 
This is different from saying that $\\\\ell_D(\\\\theta^\\\\star) \\\\approx \\\\mathbb{E} \\\\big[ \\\\ell_D(\\\\theta^\\\\star) \\\\big]$ and $\\\\ell_{BC}(\\\\theta^\\\\star) \\\\approx \\\\mathbb{E} \\\\big[ \\\\ell_{BC}(\\\\theta^\\\\star) \\\\big]$, which is required to obtain approximate solutions of (PIII) [and (SCL)]. This is what using the max-min problem ($\\\\hat{\\\\text{D}}$-CSL) enables. As we argued in Q1, this is not straightforward for non-convex optimization problems, such as (PI) and (SCL). We overcome this issue by using the non-convex duality results from (Chamon et al., 2023, Thm. 1). Combined with results from (Cotter et al., 2019; Elenter et al., 2024), we can obtain convergence guarantees for the variant of Alg. 1 presented in (3) (as we explain in the paper, line 377) towards a (probably approximately) near-optimal and near-feasible solution of (SCL) (see Q5 below for more details on convergence). Using Prop 3.2, we can finally show that such a solution is in fact also a (probably approximately) weak solution of (BVP).\\n\\nSince these learning theoretic details are not the focus of this work, we did not emphasize them in the manuscript. But we agree with the reviewer that these are needed to better understand the distinction between (SCL) and (PI). We will expand on these points in the text and include a more thorough discussion on the subject in the appendix.\"}", "{\"title\": \"Revised manuscript\", \"comment\": [\"Following the reviewers comment, we have updated our manuscript as per our responses. We have marked all changes in blue. In particular, here is a general list of modifications:\", \"we replaced our summary of contributions at the end of the introduction by an explicit list;\", \"we summarized the introductory material in Sections 2 and 3 to bring our main development (Sec. 3) earlier in the manuscript. In particular, we have moved our preliminaries on constrained learning (Sec. 
3.1) to the algorithm development section (Sec. 4);\", \"we included new remarks on Prop. 3.2 (now Prop. 3.1), the main theoretical contribution of the paper, taken from our responses to the reviewers;\", \"we expanded the derivations of our algorithm (Sec. 4), including a more detailed development of how to go from the problem (SCL) to Alg. 1. We also included more details on generalization and convergence of MH and primal-dual methods (see also App. C and D).\", \"we reinforced our previous remarks on the use cases where we believe NN-based solvers could be advantageous and included the preliminary results of our comparison to FEM solvers (Sec. 5.2);\", \"we added our preliminary results for the convection PDE (\\\"error bars\\\" in Table 1). We continue to run new simulations and will update the results as we obtain them.\", \"We hope the reviewers will find that these additions and rearrangements make the paper clearer and address their concerns directly in the manuscript. We are happy to continue discussing if they find other points that could improve our paper.\"]}", "{\"comment\": \"> **Q1:** Can the authors clarify how their approach (for instance Algorithm 1 updating schedule) compares to the update schedule in Paris' paper: https://epubs.siam.org/doi/pdf/10.1137/20M1318043 Section 2.4 (not the specific updates, but the general updates as given by the spirit of the logic presented)? In others of Paris' papers, he points to : https://arxiv.org/pdf/2007.04542. How does the new method compare to the \\\"adding weights to the loss function\\\" section 2.2.1 (Which is analogous to Paris' work also)?\\n\\n**Response:**\\nThe references mentioned by the reviewer are (Wang et al., 2021a) and (Wight & Zhao, 2021) in our manuscript respectively and we compare our work to them in Appendix F. In particular, we recall that Prop. 3.2 shows that neither adaptive sampling techniques nor loss weighting schemes are sufficient to solve BVPs. 
In that sense, both references fall short of solving BVPs. Specifically,\\n\\n- **https://epubs.siam.org/doi/pdf/10.1137/20M1318043, i.e., (Wang et al., 2021a) in our manuscript.** In Sec. 2.4 and 2.5, it proposes to set the weights of the different loss terms [$\\\\mu$ in (PI)] so that the gradients of the different terms have similar magnitudes. In that sense, (Wang et al., 2021a, eq. 40-41) are reminiscent of Adamax (Kingma & Ba, 2015). This is fundamentally different from Alg. 1. While (Wang et al., 2021a, eq. 35) may have the same form as the Lagrangian in (2), they are used in different ways. The dual problem ($\\\\hat{\\\\text{D}}$-CSL), that Alg. 1 targets, does not seek to balance gradient updates. In fact, the resulting contribution of **the losses may very much need to be unbalanced** (see, e.g., Fig. 16a in Appendix E which shows certain dual variables $\\\\lambda$ vanishing completely). The dual problem that Alg. 1 targets actually approximates the solution of the constrained problem (SCL) (as we explain in W4 above). This is essential, together with sampling from $\\\\psi_0$, to obtain a solution of (BVP) (as we prove in Prop. 3.2).\\n\\n- **https://arxiv.org/pdf/2007.04542, i.e., (Wight & Zhao, 2021) in our manuscript.** In Sec. 2.2.1, it proposes to add a **fixed weight** to the \\\"data loss,\\\" i.e., to reweight the loss relative to initial training data with respect to the losses relative to the PDE and BCs. This is not sufficient to obtain a solution of the constrained problem (SCL) (due to its non-convexity, see response to W2) that must be solved in order to obtain a weak solution of (BVP) (see Prop. 3.2). What is more, (SCL)(O) and Alg. 1 weight each data point individually to optimize the worst-case (rather than the average) fit. This leads to measures of trustworthiness for the solution as it singles out the data points that are hard to fit (by means of the dual variables $\\\\lambda$, e.g., Fig. 
3).\\n\\nIn summary, \\\"adding weights to the loss function\\\" is not sufficient to solve (BVP), especially if the weights are fixed or attempting to balance the contributions of the gradients. While we extensively cover the theoretical reasons for this here (see responses to W1, W2, and W4), we also provide substantial empirical evidence that this theoretical foundation does manifest in practice (such as Table 1, Fig 2, and Fig. 16a).\"}", "{\"comment\": \"> **W1:** The key contribution, Algorithm 1, can be seen as a variant of DNN training with learnable loss weights, which can often be found in multi-task learning, lacking novelty.\\n\\n**Response:**\\nWhile Alg. 1 is certainly important, we do not believe it to be the key contribution of this paper. In fact, as we note in our response to Reviewer 4CoR, Alg. 1 is based on constrained optimization duality, so that the updates in step 8-9 are reminiscent of classical primal-dual or dual ascent methods going back to [Arrow et al., \\\"Studies in linear and non-linear programming,\\\" 1958]. What is crucial of Alg. 1 is the combination of such primal-dual methods with the sampling-based approximation of worst-case losses (steps 4-7).\\n\\nWe prove that this combination is needed to solve BVPs in Prop. 3.2. It is therefore not enough to use either constrained optimization and Lagrangian formulations as in (Lu et al., 2021b; Basir & Senocak, 2022) or worst-case losses as in (Wang et al., 2022a; Daw et al., 2023). Indeed, note that using a constrained formulation with fixed collocation points does not recover a solution of the convection PDE (Fig. 2) and that R3 from (Daw et al., 2023) fails to recover a solution for the Eikonal equation (Table 1). In contrast, Alg. 1 succeeds in both cases (see Table 1) because it **solves (1) constrained learning problems with (2) worst-case losses**.\\n\\n1. **Solving constrained learning problems**: As Reviewer x438 notes, Alg. 
1 is based on constrained optimization duality and tackles max-min problems of the form ($\\\\hat{\\\\text{D}}$-CSL) from Sec. 3.1. This is also the case for other approaches involving (augmented) Lagrangian formulations of PINNs, such as (Lu et al., 2021b; Basir & Senocak, 2022). Yet, this is **not** the same as solving (SCL), the constrained optimization problem arising from Prop. 3.2 because they are not convex.\\n\\n We overcome this issue by posing a constrained *learning* problem (i.e., with statistical losses), namely (SCL), which allows us to use non-convex duality results from (Chamon & Ribeiro, 2020; Chamon et al., 2023) to provide approximation guarantees between (P-CSL) and ($\\\\hat{\\\\text{D}}$-CSL). Combined with results from (Cotter et al., 2019; Elenter et al., 2024), we can obtain convergence guarantees for the variant of Alg. 1 presented in (3) (as we explain in the paper, line 377) towards a (probably approximately) near-feasible and near-optimal solution of (SCL) (see response to Q5 for more details on convergence).\\n\\n2. **Worst-case losses**: Using steps 8-9 with fixed collocation points (regardless of their distribution) is still not enough to obtain a solution of a BVP: Prop. 3.2 also requires the use of worst-case losses. Alg. 1 leverages Prop. 3.1 to replace these worst-case losses by statistical losses against specially designed distributions, namely the $\\\\psi_0$ in steps 4-7. It is the combination of these two properties (duality of constrained learning problems and worst-case formulation) that ensures that Alg. 1 seeks a weak solution of the BVP.\\n\\nMore explicitly, our contributions, as summarized along lines 54-65, are therefore:\\n\\n- prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 
3.2);\\n- incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2);\\n- develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yields trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 4);\\n- illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 5).\\n\\nAs Reviewers x438 and 4CoR suggest, we will include this explicit list in the camera-ready. It is worth noting that Prop. 3.2 is not trivial in that it establishes limits under which learning approaches work. Indeed, they can only find very smooth solutions for high-dimensional state spaces (essentially, solutions in the Sobolev space $W^{(d+1)/4,2}$). This can be an issue for large-scale dynamical systems, such as those found in smart grid applications, or when transforming higher-order PDEs into higher-dimensional first-order systems.\"}", "{\"comment\": [\"> **Q3:** (Lines 58-) \\\"Instead, we use semi-infinite constrained learning techniques to develop a hybrid sampling-optimization algorithm that tackles both problems jointly without extensive hyperparameter tuning and training heuristics (Sec. 4).\\\": I am not fully convinced by this claim. Algorithm 1 still includes hyperparameters, such as $\\\\eta_p$ and $\\\\eta_d$, and training heuristics are still necessary.\", \"**Response:**\", \"- **Hyperparameters**: Alg. 
1 has hyperparameters, namely step sizes ($\\\\eta_p$, $\\\\eta_d$) and the number of samples in steps 4-7 ($N$). These are already fewer than for (PI) or other Lagrangian-based methods, such as (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023), that must choose a weight $\\\\mu$ for each loss. But most importantly, the nature of these hyperparameters is fundamentally different. The weights $\\\\mu$ from (PI), in particular, need to be retuned whenever the PDE (or parameters) changes. Even the addition of collocation points to (PI) may change the balance of the losses and require adjustments to $\\\\mu$. That is without considering the fact that using fixed $\\\\mu$ does not provide a solution of (BVP) (see response to W1 and W3). As for the collocation points, they are sampled from $\\\\psi_0$ to estimate worst-case losses. Hence, Alg. 1 can often operate with smaller $N$ than when sampling uniformly at random (particularly when solving parametrized families of PDEs as in Fig. 1a).\", \"- **Training heuristics**: Alg. 1 does *not* rely on training heuristics (such as adaptive or causal sampling, *ad hoc* weight updates, or conditional updates). All steps are justified as approximations in the manuscript. More specifically,\", \"steps 4-7 are empirical approximations of expectations with respect to $\\\\psi_0$, which Prop. 3.1 shows approximate the worst-case losses in (SCL);\", \"this sampling could be performed by any MCMC method. We use the Metropolis-Hastings algorithm described in Alg. 2 (Appendix C), which provides approximate samples from $\\\\psi_0$ [see, e.g., (Robert & Casella, 2004) and response to Reviewer JwZx for a more detailed discussion of its convergence properties];\", \"steps 8-9 describe a traditional primal-dual (gradient descent-ascent) algorithm for solving max-min problems such as ($\\\\hat{\\\\text{D}}$-CSL). 
Guarantees on generalization of this solution are given in (Chamon et al., 2023), which in combination with results from (Cotter et al., 2019; Elenter et al., 2024) provides guarantees that (3), a variant of Alg. 1, yields a (probably approximately) near-optimal and near-feasible solution of (SCL) (see Q5 for more details);\", \"Prop. 3.2 provides the last result needed to show that solutions of (SCL) approximate weak solutions of (BVP).\", \"We understand, however, that we may have missed something in the reviewer's comment. Please do not hesitate to let us know if this is the case; we would be happy to address their concerns.\"]}", "{\"comment\": \"> **W1:** The main weakness of the paper is that we don't get any sense of how well this actually works for solving PDEs. Like many \\\"deep learning for PDEs\\\" papers, it does not compare to conventional methods for solving PDEs, but only other neural networks...\\n\\n**Response:**\\nWe share the reviewer's concerns about the standards for neural PDE solvers and the lack of comparison to methods largely used by the scientific community. However, we respectfully disagree that \\\"we don't get any sense of how well this actually works for solving PDEs.\\\" We provide extensive reports on the L2 errors achieved by our approach as well as the methodology we use to compute those errors (Appendix D). This enables quantitative comparisons of our *implementation* of Alg. 1 with any other solutions (including FEM, see preliminary results below).\\n\\nThat is not to say that the reviewer (or anyone) \\\"should stop using [their] PDE solver.\\\" We completely agree that classical methods can achieve more precise solutions than NN-based ones, including (SCL). Yet, (SCL) enables new use cases. 
As we explicitly state in the manuscript (line 111): \\\"While they [NN-based methods] may not achieve the precision of classical methods [...], they are able to provide solutions for whole families of BVPs, extrapolating new solutions from existing ones, and leverage extrinsic information, such as real-world measurements.\\\" Hence, while Alg. 1 can be used to solve single PDEs (Sec. 5.1), we believe use cases not covered by traditional methods are more interesting. In fact, they form the bulk of our experiments in Sec. 5 and Appendix E. We believe that by simultaneously solving for a wide range of PDEs, Alg. 1 can decouple the computation from the use of PDE solutions, which is of great value in engineering applications, particularly in the design phase.\\n\\nTo showcase this fact, we provide preliminary results for solving the Helmholtz equation for a range of parameters using FEM and our method. In particular, we considered the Helmholtz equation with $(a_1, a_2) \\\\in [1,2] \\\\times [1,2]$ and compare SCL'(M) (see Sec. 5.2 and e.g., Table 9) with FEM:\\n- SCL'(M)\\n * Average relative $L_2$ error across an equispaced grid of 10 000 PDE parameters: 0.0125\\n * Training time: 31.4 hours\\n- FEM\\n * Average relative $L_2$ error across an equispaced grid of 25 PDE parameters: 0.0360 \\n * Time: 1.67 hours\\n\\nFor the above results for FEM, we computed 25 solutions corresponding to the 25 PDE parameters considered. Thus, for every new solution we wish to compute, it takes $\\\\frac{1.67 \\\\times 60}{25} = 4$ minutes. In contrast, SCL'(M) is trained only once and can thereafter provide new solutions very fast by running inference through the trained NN. For example, if we wanted to compute solutions on a dense grid of 10 000 PDE parameters as done for SCL'(M), this would take FEM approximately 667 hours. 
Note that these numbers compare a highly-optimized C++ implementation of FEM with 20 years of development (implementation from Fenics) with a higher-level PyTorch implementation of Alg. 1.\\n\\nThat being said, we appreciate the reviewer for pointing out (McGreivy & Hakim, 2024), which supports our philosophy that learning approaches should be focusing on tackling the alternative use cases we mentioned above.\"}", "{\"summary\": \"This paper developed SCL, a technique for solving BVPs based on constrained learning.\\nIt then developed a practical algorithm to tackle these problems and showcased its performance across a variety of PDEs.\\nSCL not only yields accurate BVP solutions, but tackles many of the challenges faced by previous methods, such as extensive hyperparameter tuning and computational costs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors tackle important problems in physics-informed learning: reducing computational costs and avoiding hyperparameter tuning.\", \"The experimental results show that the proposed formulation outperforms baselines in some cases.\"], \"weaknesses\": [\"The key contribution, Algorithm 1, can be seen as a variant of DNN training with learnable loss weights, which can often be found in multi-task learning, lacking novelty.\", \"The dependence on random seeds is unknown, potentially causing reproducibility issues. Error bars should be added.\", \"The paper is difficult to follow. The paper benefits from paragraph writing, and please present information in relevant paragraphs and sections. It was challenging to discern whether certain sentences conveyed key ideas or were merely supplementary notes.\", \"I have several critical questions. Please see Questions below. I am open to discussion.\", \"(Minor) (Lines 1730, 1733, etc.) 
Fig -> Fig.\"], \"questions\": [\"The proposed formulation seems to be a rephrasing of PDE boundary value problems with invariance, observation data into a single constraint problem. The resulting Lagrangian problem is then solved straightforwardly in Algorithm 1, reminiscent of standard multi-task learning algorithms. Is there more to it than this?\", \"(Lines 214-) \\\"Compared to penalty methods, (D\\u02c6 -CSL) does not require extensive tuning of coefficients [such as \\u03bc in (PI)] and guarantees generalization individually for each constraint (near-optimality and near-feasibility) rather than for their aggregated value.\\\": Could you elaborate on this? It seems to be a key statement regarding the paper's contribution. Could you provide some references to support this?\", \"(Lines 58-) \\\"Instead, we use semi-infinite constrained learning techniques to develop a hybrid sampling-optimization algorithm that tackles both problems jointly without extensive hyperparameter tuning and training heuristics (Sec. 4).\\\": I am not fully convinced by this claim. Algorithm 1 still includes hyperparameters, such as $\\\\eta_p$ and $\\\\eta_d$, and training heuristics are still necessary.\", \"(Lines 472-) \\\"it (causality) arises naturally by solving (SCL\\u2032)(M). As training advances, however, Alg. 1 shifts focus to fitting higher convection speeds \\u03b2. Note that this occurs without any prior knowledge of the problems or manual tuning.\\\": Could you clarify why causality naturally arises in this context?\", \"Under what conditions, does Algorithm 1 converge? (Chamon et al., 2023; Elenter et al., 2024) could be valuable references, to my understanding.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Following the reviewers comments, we have updated our manuscript (see main comment for a list of modifications, currently marked in blue in the paper). 
We hope the reviewer finds these minor additions and changes make the paper clearer and address their concerns directly in the manuscript. We are happy to provide further details and clarifications if necessary.\"}", "{\"comment\": \"We thank the reviewers and AC for their time and feedback. We feel that it has greatly improved the presentation and clarity of our results. Below, we summarize the main concerns raised in the reviews. We address these in more detail in our point-by-point response to the reviewers.\\n\\n### Contributions\\nExplicitly, the main contributions of our paper are:\\n- Prove that obtaining a (weak) solution of (BVP) is equivalent to solving a constrained learning problem with worst-case losses, namely (PIII) (Prop. 3.2).\\n- Incorporate other scientific prior knowledge to (PIII), such as structural (e.g., invariance) and observational (e.g., measurements, known solutions) information, without resorting to specialized models or data transforms (SCL, Sec. 3.2).\\n- Develop a practical hybrid sampling-optimization algorithm (Alg. 1) that requires neither tuning weights to balance different objectives nor the careful selection of collocation points. Alg. 1 also yields trustworthiness measures by capturing the difficulty of fitting PDE instances or data points (Sec. 4).\\n- Illustrate the effectiveness (accuracy and computational cost) of this method in a diverse set of PDEs, NN architectures (e.g., MLPs and NOs), and problem types (solving a single or a parametric family of PDEs; interpolating known solutions; identifying PDE instances or data points that are difficult to fit, Sec. 5).\\n\\nProp. 3.2, in particular, is key as it shows that the limitations of previous approaches are not methodological but epistemological. **It is not an issue of how they solve the problem, but which problem they solve.** Indeed, Prop. 
3.2 proves that weak solutions of BVPs are obtained by **solving (i) constrained learning problems with (ii) worst-case losses.** Hence, it is not enough to use either worst-case losses as in (Wang et al., 2022a; Daw et al., 2023) or adapt the loss weights as in (Wang et al., 2021a; Wang et al., 2022b; Maddu et al., 2022; McClenny & Braga-Neto, 2023), or even using constrained formulations (Lu et al., 2021b; Basir & Senocak, 2022). In contrast, Alg. 1 tackles both (i) and (ii) by (approximately) solving (SCL), the constrained learning problem with worst-case losses of Prop. 3.2, therefore also (approximately) solving (BVP).\\n\\n### Novelty of Alg. 1\\nAs noted above, Alg. 1 tackles both (i) and (ii) and is therefore the first method to actually tackle (SCL) and, consequently, (BVP) (see Prop. 3.2). The algorithm itself is based on constrained optimization duality, so that the updates in steps 8-9 are reminiscent of classical primal-dual or dual ascent methods going back to [Arrow et al., \\\"Studies in linear and non-linear programming,\\\" 1958]. Crucially, Alg. 1 combines such primal-dual methods with sampling-based approximations of worst-case losses (steps 4-7).\\n\\n- **Solving Constrained Learning Problems** Alg. 1 and other Lagrangian-based methods such as (Lu et al., 2021b; Basir & Senocak, 2022) tackle max-min problems of the form ($\\\\hat{\\\\text{D}}$-CSL) from Sec. 3.1. This is **not** the same as solving (SCL), a constrained optimization problem, due to non-convexity. We overcome this issue by posing a constrained *learning* problem (i.e., with statistical losses), namely (SCL), which allows us to use non-convex duality results from (Chamon & Ribeiro, 2020; Chamon et al., 2023) to provide approximation guarantees between (P-CSL) and ($\\\\hat{\\\\text{D}}$-CSL). Combined with results from (Cotter et al., 2019; Elenter et al., 2024), we can obtain convergence guarantees for (3), a variant of Alg. 
1, towards a (probably approximately) near-feasible and near-optimal solution of (SCL).\\n\\n- **Worst-Case Losses** Using steps 8-9 with fixed collocation points (regardless of their distribution) is still not enough to obtain a solution of a BVP: Prop. 3.2 also requires the use of worst-case losses. Alg. 1 leverages Prop. 3.1 to replace these worst-case losses by statistical losses against specially designed distributions, namely the $\\\\psi_0$ in steps 4-7. It is the combination of these two properties (duality of constrained learning problems and worst-case formulation) that ensures that Alg. 1 seeks a weak solution of the BVP.\"}", "{\"comment\": \"I sincerely appreciate the thorough revisions and discussions, and I am sorry for my late response. I have revisited the manuscript and could understand it more deeply, thanks to the improved readability.\", \"i_have_changed_my_review_scores_accordingly\": [\"Soundness (1 to 2), Presentation (1 to 2), Contribution (1 to 2), Rating (3 to 5).\", \"I have also changed Confidence from 4 to 3.\", \"I would like to share my additional feedback. I understand it may be too late to share, but I hope it helps.\", \"Error bars: Thank you for the additional experiments. I encourage the authors to add error bars to all the remaining experiments because I would like to know whether the performance gain is marginal or not and *when we should use the proposed method instead of previous ones*, a practically important point of view. I understand the main contribution of the paper is the theoretical part, but this point will underscore the practical relevance of the proposed theoretical structure and algorithm.\", \"(Lines 161-163) \\\"...they are the wrong problems in the first place.\\\": It would sound like an exaggeration to say that the previous approaches (PI) and (PII) are \\\"wrong\\\": It is difficult to prove these approaches are wrong ways to go, after all. 
Readers may be a bit confused, like an \\\"impossibility theorem\\\" has been proved in the paper. Instead, to avoid potential confusion and for consistency, I would recommend modifying the logic of the contributions like \\\"Our novel approach alleviates the requirements of ...hyperparameter tuning, etc..., compared to the previous methods, as a result of ...(key ideas of the proposed method)....\\\", which I think sounds more natural and is a standard logic we often find in the literature, avoiding potential overstatements and clarifying the key ideas.\", \"(Why I recommend this) I was a bit confused when I read around Lines 146-170: Did the authors address the known problems listed in the paragraph \\\"Limitations.\\\" or propose a completely new paradigm indicated in the beginning of Section 3?\", \"(Lines 207-210) \\\"While Prop. 3.1 describes a sufficient condition ... a necessary condition can be obtained ...\\\": I would recommend elaborating on this statement (possibly proving it), for clarity.\", \"How to choose $\\\\alpha$? 
Proposition 4.1 is an existence theorem and does not provide an exact $\\\\alpha$, to my understanding.\", \"(Lines 252-355) \\\"It is often possible to replace ...\\\": I would recommend adding evidence for this empirical statement.\", \"(Lines 323-328): I appreciate that the differences between the proposed Lagrangean formulation (DIV) and previous approaches in (PI) are additionally highlighted in the revised version, improving the clarity of the paper.\", \"(Minor) In Line 163, a space is missing in \\\"...worst-case losses.Hence, is not...\\\"\", \"(Minor) In Line 525, a space is missing in \\\"is (virtually) independent of the choice of$u_\\\\theta$.\\\"\", \"I wish you the best of luck with your work.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> **W2:** I note that the paper is a full 10 pages long, however I think that some of the background could be shortened somewhat (especially in the introductory sections) in order to give the description of the new technique a bit more room to breathe.\\n\\n**Response:**\\nWe will include the above explicit list in the camera-ready following the reviewer's recommendations of shortening the background parts (moving details to the appendix).\"}", "{\"comment\": \"> **Q2:** How long did it take to compute those ground truth solutions relative to the methods compared in the paper? What I'd like to see here is a thoughtful comparison with conventional methods and points along the Pareto curve trading off compute time against accuracy (here, that means the fineness of discretization for, e.g., FEM).\\n\\n**Response:**\\nAs we mention in our response to Q1, we use published ground truth solutions so that our results are easy to compare to. Those sources do not provide computation times. We do provide relative complexity comparisons in Appendix E for solving parametric families of BVPs. 
However, we compare the number of \\\"bottleneck, expensive operations\\\" (differential operator evaluations) instead of compute time, as the latter is difficult to replicate and highly dependent on hardware [as (McGreivy & Hakim, 2024) argue] and specific implementation. Still, we provide preliminary compute-time results in our response to W1 (which we will include in the revised manuscript). We also reiterate that we do not believe the best use of Alg. 1 is solving single BVPs, but targeting use cases for which classical methods are not well-suited (line 111).\"}", "{\"comment\": \"> **Q1:** I would be very pleased to hear the authors' response to my comments above, especially with respect to the proof of the reduction in conditioning number in Appendix B.\\n\\n**Response:**\\nWe hope that we have addressed the reviewer's concerns and questions above. We did not address the comment on the \\\"proof in Appendix B\\\" since we did not find any mention of Appendix B in the \\\"Weaknesses.\\\" We would be happy to elaborate on the proof if the reviewer has any concerns.\"}", "{\"comment\": \"> **W3:** At some point, the reviewer is wondering how common this is now considered when application papers now reference the constrained paper (NIPS) and say that they use PINNs + Constrained learning:\\n\\n> \\n> https://www.sciencedirect.com/science/article/pii/S1385894724012919#b78\\n> \\n> Is the current paper really 'novel' (i.e. a contribution) if others now consider things commonplace? 
The reviewer is open to being convinced of the contributions if the literature search were broader and the comparisons were against the various adaptive sampling methods mentioned by Paris and others.\\n\\n**Response:**\\nWe assume that by \\\"constrained paper (NIPS)\\\" the reviewer is referring to (Chamon & Ribeiro, 2020).\\n\\nWe note that the paper referred to by the reviewer does not use constrained optimization techniques, but modifies the underlying architecture to satisfy the PDE and BCs. As such, it is a completely different approach than the one from the current work. This approach also makes it difficult to incorporate additional prior knowledge that (SCL) explicitly considers, such as structural (e.g., invariances) and observational (e.g., prior solutions and measurements) information.\\n\\nWhile we do use constrained learning as a tool in this work, it is not straightforward that (BVP), a functional, feasibility problem, can be written as (SCL), a finite dimensional, statistical, constrained optimization problem. This is only possible once we deploy Propositions 3.1 and 3.2. The latter, in fact, shows that there are limitations to this transformation in the sense that learning approaches can only determine fairly smooth solutions for high-dimensional state spaces (essentially, solutions in the Sobolev space $W^{(d+1) / 4,2}$). This can be an issue for large-scale dynamical systems, such as those found in smart grid applications, or when transforming higher-order PDEs into higher-dimensional first-order systems.\"}", "{\"comment\": \"Following the reviewers' comments, we have updated our manuscript (see main comment for a list of modifications, currently marked in blue in the paper). We hope the reviewer finds these minor additions and changes make the paper clearer and address their concerns directly in the manuscript. 
We are happy to provide further details and clarifications if necessary.\"}", "{\"comment\": \"> **Q1:** What did you use to compute the ground truth solutions?\\n\\n**Response:**\\nAs we explain in Appendix D, the ground-truth solutions for the convection and reaction-diffusion PDEs were evaluated analytically; for the eikonal equation, we use the signed distance field reported in (Daw et al., 2023); the Burgers' and Navier-Stokes solutions were obtained from (Li et al., 2021) and the diffusion-sorption from (Takamoto et al., 2022).\"}" ] }
5KgKa96PUG
Exploring New Frontiers in Vertical Federated Learning: the Role of Saddle Point Reformulation
[ "Aleksandr Beznosikov", "Georgiy Kormakov", "Alexander Grigorievskiy", "Mikhail Rudakov", "Ruslan Nazykov", "Alexander Rogozin", "Anton Vakhrushev", "Andrey Savchenko", "Martin Takáč", "Alexander Gasnikov" ]
Distributed learning problems have gained significant popularity due to the increasing need for cluster training and the emergence of novel paradigms like Federated Learning (FL). One variant of FL, called Vertical Federated Learning (VFL), partitions data based on features across devices. The objective is to collectively train a model using the information available on each user's device. This paper focuses on solving the VFL problem using the saddle point reformulation via the classical Lagrangian function. We first demonstrate how this formulation can be solved using deterministic methods. But more importantly, the paper explores various stochastic modifications to adapt to practical scenarios, such as employing compression techniques for efficient information transmission, enabling partial participation for asynchronous communication, and utilizing coordinate selection for faster local computation. We show that the saddle point reformulation plays a key role and opens up possibilities to use the mentioned extensions, which seem to be impossible in the standard minimization formulation. Convergence estimates are provided for each algorithm, demonstrating their effectiveness in addressing the VFL problem. Additionally, alternative reformulations of the VFL problem are investigated, and numerical experiments are conducted to validate the proposed methods' performance and effectiveness.
[ "convex optimization", "saddle point problem", "vertical federated learning" ]
https://openreview.net/pdf?id=5KgKa96PUG
https://openreview.net/forum?id=5KgKa96PUG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oGfIUpAo2m", "gBNmSFu8UF", "L64G1hlUns", "KkR1Pd81dW", "0ModWM2Nfh" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1729218269892, 1733116827540, 1730667755836, 1731122772889, 1730593436625 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14169/Reviewer_T3kx" ], [ "ICLR.cc/2025/Conference/Submission14169/Authors" ], [ "ICLR.cc/2025/Conference/Submission14169/Reviewer_jdC2" ], [ "ICLR.cc/2025/Conference/Submission14169/Reviewer_7pWJ" ], [ "ICLR.cc/2025/Conference/Submission14169/Reviewer_fhae" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores new methods for Vertical Federated Learning (VFL) by reformulating the learning process using a saddle point framework instead of the traditional minimization approach. The proposed approach in the deterministic case enables solving VFL problems with enhanced convergence guarantees in terms of the eigenvalue of the data matrix. The authors also propose stochastic algorithms tailored to this reformulation and suggest modifications to address practical challenges such as communication efficiency and computational cost by implementing compression, partial participation, and coordinate selection, respectively. The paper validates the proposed methods through numerical experiments.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposes a new minimax framework for the VFL problem. The method has a better complexity constant compared to accelerated gradient descent.\\n\\n2. The theoretical guarantee in the modification of quantization for the saddle point problem in VFL is novel.\", \"weaknesses\": \"1. Insufficient Preliminaries:\\nThe paper lacks clear explanations of key concepts such as Vertical Federated Learning (VFL) modeling and biased/unbiased compression. 
It would be easier to understand the paper if a \\\"Preliminaries\\\" section defining VFL and compression techniques were added before diving into the technical details. Especially:\\n\\n1.1 in Section 3.1, the introduction of compression techniques is missing, and key notations, such as $b^k$ appear without proper definition. This makes it difficult to understand how Algorithm 2 is formulated. The authors should include a notation table or glossary at the beginning of each major section or explicitly define each new symbol when it is first introduced.\\n\\n1.2 The reference to noise affecting $Z$ (line 154) is unclear, as the source of the noise and the conditions under which it occurs are not specified. It would improve clarity to explain the origin of the noise and provide an example of when it might arise.\\n\\n\\n1.3 In the experiments section, SSP (line 462) is introduced without any prior definition. Please add explanation to SSP to improve the clarity.\\n\\n\\n2. Disorganized Presentation: The structure in lines 205\\u2013217 is difficult to follow as specified as follows:\\n\\n2.1: it mainly focuses on previous work, but the purpose of mentioning it in this context is unclear. It would be clearer if the authors separated lines 242\\u2013245 into two parts: one for discussing the differences and merits compared to each mentioned previous approach, and the other for mathematically stating the relation between equation (5) and \\\\(gap^*\\\\).\\n\\n2.2: there are multiple claims without supporting mathematical proofs. For example, the statement:\\n\\\"Criterion (5) can also be used for unconstrained/unbounded problems. To do this, one can use the trick from (Nesterov, 2007) and introduce bounded sets\\\" in line 207, and \\\"one can show that in Theorem 2.2 ... we can use the criterion\\\" in line 210. 
It would strengthen the work to provide rigorous proofs for these claims, especially if they support the contribution of this research.\\n\\n2.3: the explanation in lines 242\\u2013245 regarding the convergence criterion using \\\\(g(x,y) = xy\\\\) (from lines 205\\u2013217) is confusing. Specifically, the claim that \\\\(gap^*(x,y,z) = 0\\\\) does not make sense because the example \\\\(g(x,y) = xy\\\\) does not contain a third variable \\\\(z\\\\).\\n\\n\\n4. Clarity, Missing Verbs and Poor Sentence Structure:\\n\\n4.1 The use of \\\"it\\\" in line 172 is ambiguous, making the meaning unclear. To enhance readability, consider explicitly stating what \\\"it\\\" refers to in this context. Replacing vague pronouns with precise references will help avoid confusion.\\n\\n4.2 In line 227-228, the sentence starting with \\\"one cannot...\\\" lacks a verb, which makes it incomplete. This is problematic because this sentence is critical for comparing the new results with prior work.\\n\\n4.3 in line 520-521, the sentence is incomplete (\\\"but only...\\\"), further complicating readability.\\n\\nI suggest the authors carefully proofread this work again for a better presentation.\\n\\n5. Inadequate Experiments:\\nSince the tasks involve classification problems, I recommend that the experiments report test accuracy to effectively evaluate the proposed method.\\n\\nOn the other hand, this work proposes a new method for practical scenarios, such as partial attendance and compression. However, the impact of these proposed methods would be more convincing if accompanied by experiments demonstrating their effectiveness compared to previous approaches.\\nOn the other hand, I did not find a discussion of related work in Vertical Federated Learning (VFL) that specifically addresses partial attendance and the use of quantization. 
Including a dedicated related work section summarizing prior research on these topics would provide better context and help position the contributions of this paper.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper studies the vertical federated learning (VFL) problem with the linear model by its convex-concave saddle point reformulation, which separates the data matrix $A$ and the loss function $\\\\ell$. Based on this reformulation, the authors propose an algorithm EGVFL for VFL based on the celebrated ExtraGradient method for convex-concave saddle point problems. They establish the convergence rate of EGVFL in terms of the duality gap, assuming $\\\\ell$ and the regularizer $r$ are convex and smooth. The paper also provides convergence guarantees for EGVFL variants with biased and unbiased communication compression, partial participation, and local steps.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The saddle point reformulation seems to be natural and well-motivated.\", \"When the model is linear, the authors established extensive convergence theory for the proposed algorithm and its extensions, accommodating key features such as communication compression, partial participation, and local steps. 
Besides, the convergence rate of EG improves upon GD in terms of $\\lambda_{\\max}(A^\\top A)$.\"], \"weaknesses\": [\"The proposed algorithms only have convergence guarantees for VFL with the linear model and the extension for nonconvex problems remains heuristic.\", \"In the experiments, only general-purpose optimizers are compared while existing algorithms specifically designed for VFL (e.g., [1] and its baselines) are completely missing.\", \"Figure 1 and Figure 2 only present the relative objective gap w.r.t. the number of iterations. This might be unfair since the per-iteration computational and communication costs of the proposed algorithms and baselines are different. For example, EG requires one more communication per round than GD, and algorithms based on the saddle point reformulation also need to update auxiliary variables z and y.\", \"[1] Xie, Chulin, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, and Bo Li. \\\"Improving privacy-preserving vertical federated learning by efficient communication with admm.\\\" In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pp. 443-471. IEEE, 2024.\"], \"questions\": [\"From the first page, the algorithm and its proof techniques seem to apply to general VFL problems, which could be misleading. I think the authors should clarify at the very beginning that the main theoretical results in this paper are only for linear models.\", \"In the experiments, could you also show the relative objective gap w.r.t. 
communicated bits / total time?\", \"The sentence beginning with \\\"One cannot...\\\" in Line 227 appears to be broken.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a saddle point reformulation of the Vertical Federated Learning (VFL) problem, allowing for more efficient and privacy-preserving optimization compared to traditional minimization methods. The authors introduce a deterministic algorithm with several practical stochastic modifications that improve communication, handle asynchronous participation, and reduce computation costs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.Reformulating Vertical Federated Learning (VFL) as a saddle point problem is interesting and novel, it can offer an alternative to traditional minimization methods that could address VFL-specific challenges more effectively.\\n2.The paper presents comprehensive theoretic results.\\n3.The practical modifications for improving communication efficiency, asynchronous participation, and computational costs are well-aligned with real-world VFL challenges.\", \"weaknesses\": \"1. While the paper introduces several modifications to the basic deterministic algorithm, such as quantization, biased compression, and asynchronous participation, these are presented with high mathematical density and minimal illustrative examples. This makes it challenging for audience like me that are less familiar with saddle point methods and vertical federated learning to fully grasp each modification's practical implications and implementation nuances. I suggest the authors to enhance accessibility by maybe providing more intuitive explanations or visual illustrations (e.g., flow diagrams) of the modified algorithms.\\n\\n2. 
The experiments mainly focus on benchmark datasets with linear regression and neural network fine-tuning tasks. The paper would benefit from exploring additional VFL scenarios that could showcase the flexibility of the proposed approach in handling diverse model architectures or real-world vertical partitioning cases. I am uncertain whether the datasets and settings used in the experiments are standard for the VFL field. If they are not, it would be beneficial to include a wider variety of commonly used VFL benchmarks to strengthen the empirical validation.\", \"questions\": \"Similar to the above section: can the authors clarify whether the datasets and experimental settings used (e.g., mushrooms, a9a, w8a, MNIST, CIFAR-10; uniformly dividing between 5 clients) are considered standard benchmarks in the VFL field? If not, would the authors consider incorporating more diverse and commonly-used VFL benchmarks that reflect real-world vertical partitioning scenarios? This would help assess the generalizability of the proposed methods across typical VFL applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a saddle point reformulation for the vertical federated learning (VFL) and provides extragradient-based algorithms to solve the reformulation with the convergence rate $O(1/K)$ on the expected primal-dual gap of the reformulation. The authors also present some extension to non-convex models. 
The authors conduct numerical experiments in VFL by using linear regression with $l_2$-norm regularizations and using the ResNet18 neural network.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors start with the basic reformulation in Section 2, and then thoroughly consider several stochastic modifications such as quantization for effective communications, biased compression, partial participation for asynchronous communications and coordinate descent for reducing local computational cost. For each case, a modified algorithm is presented with a complete proof of the convergence rate $O(1/K)$.\", \"weaknesses\": \"There seems to be a gap between the considered linear models and non-convex models in the formulation (4) on page 3 and (7) on page 9. See Questions for the details.\", \"questions\": \"For solving convex-concave saddle point formulations with bilinear coupling terms (on $(x,z)$ and $y$ in the context of the paper), there are several well-studied methods such as the extragradient (also called mirror-prox) method [1] and the primal-dual hybrid gradient (PDHG) method [2,3]. [2,Section 4.1] mentions that the extragradient method has the same convergence rate $O(1/K)$.\\n\\nThen, [4] extends the PDHG method to the case of convex-concave saddle point problems, where the coupling term $\\Phi(x,z,y)$, not necessarily bilinear, is a continuous function with certain differentiability properties, convex in $(x,z)$ and concave in $y$. [4] also proves the same convergence rate $O(1/K)$, which attains the lower bound proved in [5].\", \"question\": \"Could the authors possibly show some examples in VFL where one modifies (7) on page 9 by a nonlinear coupling term $y^\\top(\\sum_{i=1}^n g_i(A_i,w_i)x_i-z)$, convex in $(x,w,z)$ and concave in $y$? 
I suppose that these cases can also be solved with the same convergence rate $O(1/K)$ based on the literature in optimization.\\n\\n\\n[1] Prox-Method with Rate of Convergence $O(1/t)$ for Variational Inequalities Arkadi Nemirovski SIAM J. Optim. 15(1), 229-251 (2004)\\n\\n[2] A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging Antonin Chambolle, Thomas Pock J. Math Imaging Vis. 40(1), 120-145 (2011)\\n\\n[3] On the ergodic convergence rates of a first-order primal-dual algorithm Antonin Chambolle, Thomas Pock Math. Program. 159(1-2), 253-287 (2016)\\n\\n[4] A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems Erfan Yazdandoost Hamedani, Necdet Serhat Aybat SIAM J. Optim. 31(2), 1299-1329 (2021)\\n\\n[5] Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems Yuyuan Ouyang, Yangyang Xu Math. Program. 185(1-2), 1-35 (2021)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5K0fmGnFqP
A Stochastic Approach to the Subset Selection Problem via Mirror Descent
[ "Dan Greenstein", "Elazar Gershuni", "Ilan Ben-Bassat", "Yaroslav Fyodorov", "Ran Moshe", "Fiana Raiber", "Alex Shtoff", "Oren Somekh", "Nadav Hallak" ]
The subset selection problem is fundamental in machine learning and other fields of computer science. We introduce a stochastic formulation for the minimum cost subset selection problem in a black box setting, in which only the subset metric value is available. Subsequently, we can handle two-stage schemes, with an outer subset-selection component and an inner subset cost evaluation component. We propose formulating the subset selection problem in a stochastic manner by choosing subsets at random from a distribution whose parameters are learned. Two stochastic formulations are proposed. The first explicitly restricts the subset's cardinality, and the second yields the desired cardinality in expectation. The distribution is parameterized by a decision variable, which we optimize using Stochastic Mirror Descent. Our choice of distributions yields constructive closed-form unbiased stochastic gradient formulas and convergence guarantees, including a rate with favorable dependency on the problem parameters. Empirical evaluation of selecting a subset of layers in transfer learning complements our theoretical findings and demonstrates the potential benefits of our approach.
[ "Nonconvex Optimization", "Subset Selection", "Stochastic", "Mirror Descent", "Stochastic Mirror Descent" ]
Accept (Poster)
https://openreview.net/pdf?id=5K0fmGnFqP
https://openreview.net/forum?id=5K0fmGnFqP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vtlEnTQRC4", "utr1kM7kjB", "pqMv36iCf2", "igddsDH6ZI", "gaCZZraZBY", "fvUZrBVN7M", "XLfABu8FoL", "Vj8rtWOP3J", "SPFlH9Q348", "RhdkB5xihI", "RCUMHA0t8N", "NRa7wpSWRs", "KeVdpuEcy8", "GAiTJPzoUY", "5BJebPlnTm", "0GKXmJ7Uaf" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730522297866, 1732656416505, 1732112596219, 1732620246004, 1737523752020, 1732665924832, 1730592852819, 1732111803799, 1732113807563, 1732114217559, 1732691772109, 1730684538688, 1733752869056, 1732656369832, 1732578282628, 1732113709633 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_dyZD" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_cTYy" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_cTYy" ], [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_eDJC" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_cTYy" ], [ "ICLR.cc/2025/Conference/Submission6224/Area_Chair_FtXA" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ], [ "ICLR.cc/2025/Conference/Submission6224/Reviewer_dyZD" ], [ "ICLR.cc/2025/Conference/Submission6224/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a novel stochastic optimization algorithm to the *subset selection\\\" problem. 
The objective of this problem is to minimize a loss on the subsets of an ambient set, and this combinatorial problem is NP-hard. This paper proposes two stochastic approximations where the optimization variable instead becomes a probability distribution over the subsets.\\n\\nCompared to previous works on the stochastic subset selection problem, this paper makes only mild assumptions on the loss function, which is only defined over the discrete subsets and thus not differentiable over some continuous space. \\n\\nTo approach this setting, the authors define a stochastic gradient, where the derivative is taken over the parameters determining the probability distributions over the subsets. They further show that this stochastic gradient is 1) unbiased, 2) bounded, and 3) relatively convex (an extended notion of strong convexity). These properties allow the authors to apply a stochastic mirror descent algorithm and utilize existing analysis in the literature [1] to prove convergence of their algorithm.\\n\\nFinally, the authors provide a practical application of their method for the transfer learning problem and provide experimental results.\\n\\n[1] Zhang S, He N. On the convergence rate of stochastic mirror descent for nonsmooth nonconvex optimization. arXiv preprint arXiv:1806.04781. 2018.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Overall, this paper is well-written and offers a neat solution to a difficult problem. While there are not any mind-blowing mathematical techniques and the prior literature [1] does quite a bit of heavy lifting, the proposed method has sufficient contribution to warrant a publication.\\n\\nThe biggest strength of this paper is the writing. The authors do a good job of guiding the reader through the various steps of their method and their claims are backed up by technical results. 
There is no \\\"fluff\\\" in the paper in the sense that all results contribute meaningfully to the overall picture.\\n\\nI checked most of the proofs and they all seem to be correct up to constants or typos. I did not check Lemma 7.1 or 7.2, but I presume some variation of those claims must be true. The other proofs all look in good shape (in a first pass), and the techniques generally follow established literature, so I don't expect there are any hidden \\\"surprises\\\".\\n\\nMy overall assessment is that the paper is not ground-breaking but does the job. It solves a relevant problem with a well-constructed method. While the lack of mathematical novelty stops it from being a great paper, it should solidly pass the threshold for acceptance.\", \"weaknesses\": [\"While the overall quality of the paper is good, there are a few places where the authors' intentions could be better clarified:\", \"The formulation of the mirror descent step in line 4 of Algorithm 1 is non-standard and I suggest the authors add a few lines of math to show how one can arrive at this line from a more standard form of mirror descent.\", \"Statements of Theorems 5.1 and 5.2 are rather difficult to parse. Initially, it wasn't immediately clear to me whether a smaller optimality gap $\\tau$ implies a smaller $c^*$. The authors should add some discussion to guide the readers on how to better interpret the equation on line 291, and a plot could be nice here. Also, in the proof, line 802 suffers a similar issue and more detailed steps should be provided.\", \"Section 6 claims the stochastic gradient is inspired by the REINFORCE RL algorithm. But I cannot see this connection at all. I think it would be better if the authors just directly offer the intuition through a few lines of math instead of wrapping the logic around a gray box.\", \"The experimental setup with transfer learning may be unfamiliar for many readers and a more gentle introduction to this problem is necessary. 
I personally had to Google this because the paragraph at the bottom of page 8 is on the shorter side. Also, I am still confused by the role of HPO. The authors should more explicitly state the objectives of these experiments and describe the terminologies for a broader audience.\", \"The numerical results in Table 1 do not show a significant advantage for the authors' approach. But I don't view this as essential to the overall quality of the paper.\", \"Minor errors/typos:\", \"in Theorem 7.3, should be $(u-l)^2$.\", \"line 281, should be relaxation in $(P_k^c)$.\", \"line 673: should be Lemma B.1.1\", \"line 837, repeated $\\\\in$ symbol\"], \"questions\": [\"Regarding the stochastic relaxation from $(P)$ to $(P_k)$ and $(P_B)$, do you know the optimality gap of this relaxation? If this has already been studied in the literature, it would be nice to have a brief discussion of those results.\", \"Just to make sure, would Lemma B.1 imply $L^* = L^*_k = L^*_B$? Also, I don't think this lemma is necessary for the proof of Theorem 5.1/5.2?\", \"What is Appendix E for?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive and constructive review. We appreciate your thoughtful comments and have worked to address them in the following rebuttal and accompanying revisions to the manuscript.\\n\\n### **Nonstandard Mirror Descent Step** \\nThanks for pointing this out; we reformulated the Mirror Descent step to a more standard (equivalent) expression, which we hope improves readability.\\n\\n### **Statements of Theorems 5.1 and 5.2** \\nWe updated the statements of these theorems to better reflect the relationship between $\\tau$ and $c$, clarifying that a decrease in $\\tau$ leads to a decrease in $c$. 
Additionally, we included a remark to further elucidate this relationship.\\n\\n### **Referencing the REINFORCE Estimator** \\nFollowing your suggestion, we have removed the references to the REINFORCE algorithm and replaced them with an alternative intuition, which we believe aligns better with the context of our work \\u2013 thanks.\\n\\n### **Transfer Learning (TL) and Hyperparameter Optimization (HPO)** \\nTo address your feedback, we have added a brief explanation of TL to the opening paragraph of Section 8. \\nRegarding HPO, it is a standard practice used to optimize parameters such as the learning rate during model training. In our case, we optimized $\\\\alpha$ and $\\\\tau$. To ensure fairness, we allowed HPO for the benchmark algorithms as well.\\n\\n### **Numerical Results** \\nWe have conducted additional synthetic experiments, which we hope better demonstrate the advantages of our approach and provide further empirical support for our claims.\\n\\n### **Optimality Gap from $(P)$ to $(P_k)$ and $(P_B)$, and from $L^{*}$ to $L_k$ and $L_B$**\", \"we_clarify_the_relationship_between_these_formulations\": \"- The minimum of $(P)$, $L^{*}$, equals the infimum of $(P_k)$, $L_k$, as shown in Lemma B.1. \\n- The minimum of $(P_B)$, $L_B$, provides a lower bound on both $L^{*}$ and $L_k$. \\n\\nHowever, without additional restrictions on the choice of $\\\\ell$, the optimality gap between $(P_B)$ and $(P)$ can be unbounded. Consider the following adversarial extension of $\\\\ell$ to subsets of cardinality different than $k$ \\u2013 set $\\\\ell(C) = m$ for a subset $C$ with cardinality different from $k$, and let $m \\\\to -\\\\infty$. In this scenario, any set of weights that assigns positive probability to $C$ results in the problem value becoming a constant fraction of $m$, leading to an unbounded gap. 
Specifically, this applies when all element weights are restricted to $(0,1)$, as every subset $C$ retains a positive probability of selection.\\n\\n### **Inclusion of Lemma B.1** \\nWhile Lemma B.1 is not strictly necessary for the proofs of Theorems 5.1 and 5.2, we believe it serves an important role by providing insight into the transition from $(P)$ to $(P_k)$ and $(P_B)$. For this reason, we propose retaining it in the appendix to preserve its explanatory value. We have removed the sentence between Lemma B.1 and the proof of Theorem 5.1, which incorrectly stated that Lemma B.1 is used in the proof of Theorems 5.1 and 5.2.\\n\\n### **Appendix E** \\nAppendix E establishes a theoretical foundation for efficiently calculating the Mirror Descent step, which we believe could be valuable for those implementing our algorithm. \\n\\nThank you again for your thoughtful feedback and positive evaluation of our work. We are confident that the revisions and clarifications we have made will address your concerns and further strengthen the manuscript.\"}", "{\"comment\": \"Thanks for the detailed response!\\n\\nI might be missing something obvious, but I don't see how remark 3.1 fixes my concern. \\nFor instance, suppose $\\\\mu(x)=x^2$, then $f(x)=-ax^2$ is $(\\\\rho,\\\\mu)$ RWC only for $a<\\\\rho$. So the assumption that $f$ is $(\\\\rho,\\\\mu)$ RWC implicitly puts restrictions on what $f$ could possibly be, it is not accurate to say that this paper places absolutely no assumptions on the losses\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I see, thanks. I will retain my positive score\"}", "{\"summary\": \"The authors address the subset selection problem, a fundamental task in machine learning and computer science, by introducing a stochastic formulation for selecting minimum-cost subsets in a black-box setting. 
Here, only the metric value for a subset is accessible, limiting the information available for the selection process. The authors propose a two-stage framework where subset selection is decoupled into an outer selection component and an inner cost evaluation component. The paper parameterizes the subset distribution using a decision variable, optimized via Stochastic Mirror Descent (SMD). The proposed distributions enable constructive, closed-form, unbiased stochastic gradient formulas with favorable convergence guarantees. The approach is empirically validated on a subset selection task involving layer selection in transfer learning, showcasing both theoretical and practical benefits.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents an approach to the subset selection problem by proposing a stochastic formulation optimized using Stochastic Mirror Descent (SMD). Traditional subset selection techniques often involve deterministic methods, such as greedy algorithms or combinatorial optimization, which lack the flexibility and exploration capabilities needed for high-dimensional or complex subset spaces.\\n\\n2. They derive closed-form, unbiased stochastic gradients for the subset distribution parameters, which makes the method practical and computationally efficient, especially compared to approaches requiring approximate gradients.\\n\\n3. Overall, the paper is well-structured and clear, with a logical progression from the problem formulation to the proposed approach, theoretical analysis, and empirical evaluation. \\n\\n4. The significance of this work is substantial, both theoretically and practically. The subset selection problem is fundamental in machine learning, impacting areas like feature selection, model compression, neural architecture search, and sensor placement. 
By proposing a stochastic approach, this paper introduces a new direction for subset selection that emphasizes flexibility, exploration, and computational efficiency, which is especially relevant in high-dimensional or complex problem spaces.\", \"weaknesses\": \"1. While the layer selection experiment in transfer learning provides an interesting and relevant use case, the empirical evaluation is limited to a single task. Subset selection has a wide range of applications across machine learning, including feature selection, model pruning, sensor placement, and combinatorial optimization. Evaluating the proposed method on a broader range of tasks (as stated above) would strengthen the paper and better demonstrate the method\\u2019s generalizability.\\n\\n2. There are various existing subset selection methods, both deterministic (e.g., greedy algorithms, combinatorial optimization) and stochastic (e.g., Monte Carlo methods, stochastic optimization techniques). The paper lacks a comparison with these methods, which would provide readers with more context on the advantages and disadvantages of the proposed approach and help highlight its unique contributions.\\n\\n3. The paper does not clearly address how the assumptions of smoothness and favorable structure in the subset metric or cost function impact the theoretical results\\u2019 robustness in settings with irregular cost functions. Extending the analysis to cover more realistic conditions or discussing the limitations of these assumptions would provide a clearer understanding of the method\\u2019s applicability in diverse practical scenarios.\\n\\n4. The choice of specific distributions for sampling subsets is not well-justified in the paper. While the authors mention that the distribution parameters are optimized, they do not explain why these particular distributions were chosen or how they might perform compared to other candidate distributions. 
This choice is critical, as different distributions could have substantial effects on the subset sampling\\u2019s efficiency and coverage. A clearer explanation of the rationale behind the selected distributions or a comparison with alternative options would strengthen the argument and clarify the trade-offs involved in the design.\\n\\n5. Despite the use of stochastic methods, the proposed approach may still involve significant computational costs, particularly when repeatedly sampling and evaluating subsets over many iterations. A more thorough discussion on the scalability, computational complexity, and potential approaches to mitigate these challenges would make the paper more practically relevant for large-scale applications.\", \"questions\": \"1. What are the specific properties of the chosen distributions that make them suitable for subset selection? Are these distributions known to have properties that benefit the subset selection problem?\\n\\n2. How does the approach handle non-smooth or noisy subset metrics? Would there be significant degradation in performance, or could the method be adapted to handle irregular cost functions?\\n\\n3. Can you discuss the relative benefits of each stochastic formulation (fixed vs. expected cardinality)? Could the authors discuss specific scenarios where one formulation might be preferable over the other?\\n\\n4. What potential approximations could make this approach more feasible for large-scale problems? Have the authors considered any approximation techniques, such as subsampling or early stopping, that could reduce the computational cost? Discussing potential modifications or approximations would provide insights into how the method could be adapted for resource-constrained environments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive and thoughtful review. 
We appreciate your feedback and aim to address your questions and concerns both in this rebuttal and in the accompanying revisions to the paper.\\n\\n1. **Assumptions on $\\ell (\\cdot)$:** \\n Please first note that the requirement that $f (\\cdot)$ be RWC with respect to $\\mu$ imposes no restrictions on $\\ell (\\cdot)$ itself, as $f (\\cdot)$ remains RWC for any choice of $\\ell$ and any $1$-strongly convex $\\mu$ (as noted in Remark 3.1). We understand that this can be described better, and we have revised accordingly to note that the RWC constant $\\rho$ does depend on $\\max\\limits_{C \\in \\mathcal{C}} \\ell(C)$. \\n Regarding the bounds on $\\ell(C)$, our approach indeed requires knowledge of some lower and upper (not necessarily tight) bounds on $\\ell(C)$, which might be nontrivial depending on the setup \\u2013 we have revised accordingly, thanks. \\n\\n2. **Experiments:** \\n We appreciate and agree with your suggestion regarding the experiments. New synthetic experiments designed to clearly demonstrate our approach in more classical problems were added. Due to space constraints, these experiments are included in the appendix, and we refer the readers to the relevant section in the appendix from the Experiments section.\\n\\n3. **Theorem 7.3:** \\n We fully agree that the proof of Theorem 7.3 is a direct application of Hoeffding\\u2019s inequality \\u2013 its inclusion is not due to the theoretical derivation; rather, it serves an important role in the overall narrative of the paper. 
Specifically, our intended contribution can be summarized as follows:\\n - Subset Selection can be transformed into an equivalent stochastic optimization problem.\\n - The stochastic problem can be approximated arbitrarily well.\\n - We propose an algorithm for the approximate stochastic problem that converges to solutions satisfying necessary optimality criteria at a known rate.\\n - Any solution to the approximate stochastic problem can be used to efficiently sample subsets with values arbitrarily close to---or better than---the stochastic solution.\\n\\n In this context, Theorem 7.3 implements the fourth step, bridging the stochastic problem and the original subset selection task. We can further emphasize its purpose in the paper if you believe that it is necessary, but we see it as an integral part of the main text and therefore, think it should not be omitted.\\n\\n4. **Smoothing Techniques:** \\n Thanks for pointing this out. We added a brief mention of alternative smoothing techniques, but unfortunately, due to space limitations, we have no space to elaborate further.\\n\\n5. **Lower Bound Dependence on $n$:** \\n Regarding your question on the lower bound dependence on $n^{2.5}$---this is an excellent question. Unfortunately, we do not have a definitive answer at this time. The question becomes even more complex because our algorithm converges to necessary optimality conditions (which is the standard type of result in nonconvex optimization) rather than to global optimality.\\n\\n6. **Minor Comments:** \\n We agree with both of your minor comments and have addressed the issues accordingly.\\n\\nThank you again for your feedback and constructive comments. We hope these changes address your concerns and further clarify the contributions of our work.\"}", "{\"comment\": \"### **Noisy Subset Metrics**\\nThe impact of noisy subset metrics is an excellent question that warrants further investigation. 
Based on our understanding, the convergence results should hold under the following conditions: \\n1. The gradient estimators remain unbiased despite the noise. We expect this to hold when the noise is independent of the sampling process, though this requires further investigation. \\n2. The noise is bounded, ensuring the sampled subset values remain within known lower and upper bounds. \\n\\n### **Fixed vs. Expected Cardinality** \\nBroadly speaking, the choice between fixed and expected cardinality depends on whether cardinality is a strict constraint or a recommendation. For example: \\n- In a Subset Sum problem, subsets of size $k-2$ are not feasible, making fixed cardinality essential. \\n- In feature selection, subsets with approximately $k$ elements may suffice, making expected cardinality more practical. \\n\\nMoreover, the expected cardinality setting can encode fixed cardinality by assigning a loss slightly above the upper bound to subsets with undesired sizes. However, this can slow convergence due to the prevalence of low-quality samples. \\n\\nLastly, the expected cardinality setting scales much better with $k$, offering significant advantages in large-scale applications. We have included a concise discussion of this comparison in the paper. \\n\\nThank you again for your constructive feedback and for highlighting these important considerations. We hope our response and the revisions to the paper address your concerns. \\n\\n### **References** \\n[1] Adeel Pervez, Phillip Lippe, and Efstratios Gavves. Scalable subset sampling with neural conditional Poisson networks. *In The Eleventh International Conference on Learning Representations,* 2022.\"}", "{\"comment\": \"Thank you for the positive and constructive feedback provided by all reviewers. We have worked to address your comments and suggestions through both this rebuttal and revisions to the manuscript. 
Below, we summarize the major changes and clarifications made in response to your feedback:\n\n**Experimental Enhancements and Results** \nWe conducted additional synthetic experiments on more classical subset selection problems to better demonstrate the performance and robustness of our Subset Selection algorithm. These new experiments complement the Transfer Learning experiment, which remains in the main text, while the synthetic results have been added to the appendix. \nThe new synthetic experiments illustrate how favorable structure strongly influences the convergence rate of our algorithm. This behavior is tied to how informative the subset loss $\\ell(C)$ is about the losses of other subsets that share elements with $C$.\n\n**Theoretical Clarifications and Adjustments** \nWe revised the Mirror Descent step to use a more standard and equivalent notation, improving clarity. \nThe statements of Theorems 5.1 and 5.2 were updated to clarify that a decrease in $\\tau$ leads to a decrease in $c$. A new remark was added to further elucidate this relationship. \nWe replaced mentions of the REINFORCE algorithm with alternative intuitions to better align with the context of our work. \n\n**Subset Distribution and Sampling** \nThe distribution classes used in our method satisfy key criteria, including representation of optimal subsets, efficient gradient computation, and favorable optimization properties. We acknowledge that alternative distributions satisfying these criteria could achieve similar results and have flagged this as an area for future exploration. \n\n**Other Clarifications** \nWe included a brief explanation of TL in Section 8 to better contextualize its relevance. \n\n\n**Phrasing Modifications** \nSome sentences were slightly modified to enhance conciseness, for the purpose of freeing up space to accommodate the revisions following the reviewers\u2019 comments and suggestions. 
These modifications do not alter the content or intent of the paper.\"}", "{\"comment\": \"Thank you.\"}", "{\"summary\": \"This paper studies the subset selection problem. Two stochastic formulations of the\nproblem are proposed, along with computable approximations, which are solved via mirror descent, and \na $1/\\sqrt{T}$ rate of convergence to a stationary point is proven. The methods are tested \nempirically in a transfer learning task.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"- It is rather surprising to me that the approach is seemingly able to\nget around making assumptions on the underlying structure of the losses that \none would see in the combinatorial bandits literature --- i.e., that \nthe losses take the form $\\langle\\ell_t, w\\rangle= \\sum_i \\ell_{ti}\\mathbb{I}(w_i=1)$, whereas in this \nwork it is possible to have losses defined in such a way that every possible subset maps to independent values. \n\n- I really like the idea of the second approach, in which the condition that $\\\\|w\\\\|_1\\le k$ is relaxed \nto $\\\\|w\\\\|_1\\le k$ *on average*, in exchange for a more readily computable update\n\n- The paper is generally well-written and well explained\", \"weaknesses\": [\"Lines 33/34: we make *no assumptions* regarding the loss function other than having lower and upper bounds\", \"This is not true; the results rely on relative weak convexity of $f$ wrt $\\mu$, and moreover any assumptions made about $\\mu$\", \"will imply certain restrictions on the curvature of $f$ as well. 
For instance, the stationarity measure is related to\", \"the more traditional one for $\\\\mu$ with Lipschitz gradients, and hence the results are not necessarily\", \"meaningful unless $f$ is RWC wrt some smooth $\\\\mu$, which implicitly limits what $f$ can be\", \"The approach critically relies on the approximations to the stochastic problems, and choosing a valid approximation constant\", \"seems to require prior knowledge of $\\\\max_C \\\\ell(C)$ and $\\\\min_C \\\\ell(C)$. So we must not only assume that $\\\\ell$ is bounded, but\", \"that the bounds are *known* to the user. This is arguably a very strong assumption to make.\", \"The experiments feature an interesting *application* of the subset selection problem, but given that this paper is about a new approach to a rather fundamental problem, it would have been more illuminating to see results in more standard subset selection problems such as Vertex Cover and comparisons against existing methods. As it stands, the experiments currently in the paper do not shed any light on how well the new approach works for subset selection, because it's not clear to me how important subset selection really is for transfer learning.\", \"Theorem 7.3 seems unnecessary --- it appears to just be a straight-forward application of Azuma-Hoeffding.\", \"There is a long history of using a smoothing of the loss function to relax convexity assumptions; perhaps there should be more discussion on this front in the related work\", \"### Minor Details\", \"Line 199: This is a very atypical way to write the mirror descent update; is there\", \"a reason the update is not written in terms of the Bregman divergence?\", \"It looks like you might be using \\\\Cref in some of your appendix names, you might want to consider using something more based on the standard \\\\ref call, like \\\"Proofs of Section~ \\\\ ref{sec:gradientestimators}\\\", in these places. 
This will still put an in-text reference with a hyperlink, but now the index in the pdf table of contents will correctly say \\\"Proofs of Section 7\\\".\"], \"questions\": \"- As mentioned above, the approach seems to work even when the loss is defined in such a way that\nevery subset maps to a distinct and unrelated loss. This suggests that the problem setting should be significantly harder \nthan e.g. the combinatorial bandits setting, where we just need to find all the \\\"best\\\" indices. In fact, this difficulty seems to be reflected directly in the upper bound: the numerator is a factor of $n^2$ larger than what is attainable in the combinatorial bandits setting ($\\sqrt{n}$). Do you expect that this is the optimal dependence?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces a novel stochastic optimization framework for the subset selection problem. The authors propose two innovative stochastic formulations within a black-box setting, where only subset metric values are accessible. By leveraging Stochastic Mirror Descent (SMD) to optimize the distribution parameters, the study provides significant theoretical contributions.\n\nThe paper is noted for its clear and well-organized presentation, strong theoretical foundations, and originality of approach. Methodological innovations and comprehensive theoretical analysis are identified as major strengths. Additionally, the empirical validation in transfer learning effectively demonstrates the practical relevance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers provided constructive feedback highlighting areas for improvement, such as expanding empirical evaluations to include more standard subset selection tasks, comparing with existing deterministic and stochastic methods, justifying the choice of subset distributions, and discussing computational scalability. 
The authors addressed these comments by expanding their experiments, including comparisons with baseline methods, providing clearer explanations of their methodological choices, and discussing strategies for enhancing computational scalability.\"}", "{\"comment\": \"Thank you for your question. I believe that the missing piece here is that $f(\\\\cdot)$ is not an arbitrary function we receive as input \\u2013 it is constructed as an expectation over the values of $\\\\ell(\\\\cdot)$, where the expectation is with respect to the distribution induced by the decision variables.\\n\\nIt is the distribution family of choice that makes $f(\\\\cdot)$ relatively weakly convex with a certain constant, and the bounds of $\\\\ell(\\\\cdot)$ affect this constant. We show it on a small example below.\\n\\nThis example considers the choice without replacement scenario, with $n=3$, $k=2$, and\\n$\\\\ell(\\\\cdot)$ defined as $\\\\ell(\\\\\\\\{1,2\\\\\\\\}) = 0$, $\\\\ell(\\\\\\\\{1,3\\\\\\\\}) = 1$, $\\\\ell(\\\\\\\\{2,3\\\\\\\\}) = 2$.\\n\\nSo, the probability of choosing the subset $\\\\\\\\{0,1\\\\\\\\}$ given a weights vector $(x_1,x_2,x_3)^T$ is:\\n\\n$x_1 \\\\cdot \\\\frac{x_2}{1-x_1} + x_2 \\\\cdot \\\\frac{x_1}{1-x_2}$.\\n\\nSimilarly, the probabilities of choosing the subsets $\\\\\\\\{0,2\\\\\\\\}$ and $\\\\\\\\{1,2\\\\\\\\}$ are:\\n\\n$x_1 \\\\cdot \\\\frac{x_3}{1-x_1} + x_3 \\\\cdot \\\\frac{x_1}{1-x_3}$,\\n\\nand\\n\\n$x_2 \\\\cdot \\\\frac{x_3}{1-x_2} + x_3 \\\\cdot \\\\frac{x_2}{1-x_3}$,\\n\\nrespectively.\\n\\nTherefore, the expectation function $f(\\\\cdot)$ is given by:\\n\\n$f(x_1,x_2,x_3) = \\\\left(x_1 \\\\cdot \\\\frac{x_2}{1-x_1} + x_2 \\\\cdot \\\\frac{x_1}{1-x_2}\\\\right) \\\\cdot 0 + \\\\left(x_1 \\\\cdot \\\\frac{x_3}{1-x_1} + x_3 \\\\cdot \\\\frac{x_1}{1-x_3}\\\\right) \\\\cdot 1 + \\\\left(x_2 \\\\cdot \\\\frac{x_3}{1-x_2} + x_3 \\\\cdot \\\\frac{x_2}{1-x_3}\\\\right)\\\\cdot 2$.\", \"some_calculations_show_that_the_hessian_is_given_by\": 
\"$\\\\begin{bmatrix}\\n\\\\frac{2x_{2}}{\\\\left(1 - x_{0}\\\\right)^{3}} & 0 & \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}} + \\\\frac{x_{0}}{\\\\left(1 - x_{0}\\\\right)^{2}} + \\\\frac{1}{1 - x_{0}} \\\\\\\\\\\\\\\\\\n0 & \\\\frac{4x_{2}}{\\\\left(1 - x_{1}\\\\right)^{3}} & 2 \\\\left(\\\\frac{1}{\\\\left(1 - x_{1}\\\\right)^{2}} + \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}}\\\\right) \\\\\\\\\\\\\\\\\\n\\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}} + \\\\frac{x_{0}}{\\\\left(1 - x_{0}\\\\right)^{2}} + \\\\frac{1}{1 - x_{0}} & 2 \\\\left(\\\\frac{1}{\\\\left(1 - x_{1}\\\\right)^{2}} + \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}}\\\\right) & \\\\frac{2 \\\\left(2x_{1} + x_{0}\\\\right)}{\\\\left(1 - x_{2}\\\\right)^{3}}\\n\\\\end{bmatrix}$.\\n\\nNext, we will see that $f(\\\\cdot)$ is $(\\\\rho^c_k, \\\\mu)$-RWC with $\\\\rho_k^c$ defined as in Corollary 7.1, and $\\\\mu(x) = \\\\frac{1}{2}\\\\Vert x \\\\Vert^2$. It should be noted that the same proof below would have worked for the two other choices of $\\\\mu(\\\\cdot)$ suggested in Corollary 7.1 as well.\\n\\nAssume that $(x_1,x_2,x_3)^T \\\\in \\\\Delta_3^c$ for some choice of $c \\\\in \\\\left(0, \\\\frac{1}{3}\\\\right)$. It follows that the $\\\\rho$ from Corollary 7.1 is:\\n\\n$\\\\rho_k^c = c^{-2} n \\\\left(k^2 + k - 1\\\\right) \\\\cdot \\\\max_i |\\\\ell(C_i)| = c^{-2} \\\\cdot 3 \\\\cdot (4 + 2 - 1) \\\\cdot 2 = 30 c^{-2}$.\\n\\nPlease note that $\\\\rho_k^c$ depends on $\\\\max_i |\\\\ell(C_i)|$. 
In terms of the theoretical convergence results, this is the main point in which the choice of $\\\\ell(\\\\cdot)$ comes into play.\\n\\nTo prove that $f(\\\\cdot)$ is $(\\\\rho^c_k, \\\\mu)$-RWC over $\\\\Delta_3^c$, it is sufficient to show that:\\n\\n$\\\\nabla^2 f(x) + \\\\rho \\\\nabla^2 \\\\mu(x) \\\\succeq 0$,\\n\\nwhich is equivalent, by our choice of $\\\\mu$, to:\\n\\n$\\\\nabla^2 f(x) + \\\\rho I \\\\succeq 0$.\\n\\nTo prove that, it is sufficient to show that the resulting matrix on the left-hand side is diagonally dominant. In our case, since all elements of the Hessian are nonnegative for $x \\\\in \\\\Delta_3^c$, it is sufficient to show that:\\n\\n1. $\\\\rho^c_k \\\\ge \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}} + \\\\frac{x_{0}}{\\\\left(1 - x_{0}\\\\right)^{2}} + \\\\frac{1}{1 - x_{0}}$,\\n2. $\\\\rho^c_k \\\\ge 2 \\\\left(\\\\frac{1}{\\\\left(1 - x_{1}\\\\right)^{2}} + \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}}\\\\right)$,\\n3. $\\\\rho^c_k \\\\ge \\\\frac{x_{0}}{\\\\left(1 - x_{0}\\\\right)^{2}} + \\\\frac{1}{1 - x_{0}} + \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}} + 2 \\\\left(\\\\frac{1}{\\\\left(1 - x_{1}\\\\right)^{2}} + \\\\frac{x_{2}}{\\\\left(1 - x_{2}\\\\right)^{2}} + \\\\frac{1}{1 - x_{2}}\\\\right)$.\\n\\nIn all these cases, it can be shown that the right-hand side of the inequalities is upper-bounded by $10c^{-2}$, which is smaller than $\\\\rho_k^c$.\\n\\nTherefore, $f(\\\\cdot)$, which was constructed from $\\\\ell(\\\\cdot)$, is $\\\\rho_k^c$-RWC with respect to $\\\\mu(\\\\cdot)$. Corollary 7.1 shows that for every loss function $\\\\ell(\\\\cdot)$, one can construct an expectation loss function $f(\\\\cdot)$ in this way, and that the resulting function $f(\\\\cdot)$ is $(\\\\rho,\\\\mu)$-RWC with respect to the three options of $\\\\mu(\\\\cdot)$ outlined in the Corollary. 
It should be noted that the value of $\\\\rho$ depends on $\\\\ell(\\\\cdot)$, as it is a function of $\\\\max\\\\limits_{C \\\\in \\\\mathcal{C}^k} \\\\left|\\\\ell(C)\\\\right|$.\"}", "{\"comment\": \"Thank you for the detailed response. I find the clarification on TL very helpful and the new numerical results on the synthetic tasks a valuable addition to this paper.\\n\\nI will maintain my score of **6**. And if I could give a more precise score, the revision elevates the paper to a **6+**.\"}", "{\"comment\": \"**Due to the maximum comment length limit, we split this comment into two parts.**\\n\\nThank you for your positive and detailed review. We appreciate your thoughtful comments and hope that this rebuttal, alongside the accompanying revisions, will address your concerns and further clarify our contributions.\\n\\n### **Experiments** \\nWe added three sets of synthetic experiments comparing our approach to Monte Carlo, 1-Flip and Projected Gradient Descent methods (the latter is included in one of the three setups). These experiments complement the Transfer Learning (TL) experiment: while the TL experiment demonstrates the applicability of Subset Selection to diverse learning tasks, the new synthetic experiments highlight the performance and robustness of our Subset Selection algorithm in various scenarios.\\n\\n### **The Effect of Favorable Structure** \\nAs shown in the new synthetic experiments, favorable structure strongly influences the convergence rate of our algorithm. This effect can also be seen in the gradient estimator formula: if a sampled subset $C$ incurs a high loss, the probabilities of the elements in $C$ being selected decreases, and vice versa. This behavior reflects an implicit assumption of our algorithm: that the subset loss provides meaningful information about the influence of its elements. 
\\n\\nAt a minimum, for any element $e \\\\in C$, the subset loss $\\\\ell(C)$ provides information about one subset containing $e$ (specifically $C$ itself). When this assumption holds more strongly\\u2014e.g., $\\\\ell(C)$ provides insights into losses of other subsets sharing elements with $C$\\u2014the algorithm\\u2019s performance improves. We have included a concise version of this discussion in the paper.\\n\\n### **Subset Distribution Choices**\", \"the_distribution_classes_we_use_meet_three_key_criteria\": \"- They can represent the distribution that selects the optimal subset with probability 1 (for the weighted choice without replacement distribution, this holds in limit due to the open feasible set). \\n- They allow efficient computation of gradient estimators. \\n- We establish favorable properties for the induced optimization problems, such as bounds on the singular values of the Hessian and second-moment bounds on the gradient estimator. \\n\\nIt seems plausible that similar results could be achieved for any distribution satisfying these criteria, although further investigation would be required to verify it.\\n\\n### **Scalability and Performance Considerations** \\nRegarding the practicality and relevance to large-scale applications, below we provide detailed performance enhancing adaptations. \\n\\n#### **Theory-Preserving Adaptations**: \\n- Choice without replacement: \\n Sampling a permutation instead of a subset and adapting the gradient estimator accordingly can reduce the computational cost of gradient evaluation. While we have not yet proven this rigorously, we believe the gradient estimator bounds would still hold. This approach may, however, increase the gradient\\u2019s variance. \\n\\n- Caching subset evaluations: \\n Storing values of previously evaluated subsets in a hash table can reduce redundant calculations, especially when the algorithm is close to convergence and repeatedly samples the same subsets. 
\\n\\n#### **Heuristic Adaptations**: \\n- Combining sampling techniques: \\n For large $k$, we can integrate our approach for sampling subsets with expected cardinality $k$ with the method in [1]. This would yield subsets of exact size $k$, albeit at the cost of slightly biased gradient estimators. Given the design of the sampling process in [1], we expect the bias of the gradient estimator to remain low. \\n\\n- Objective evaluation for hyperparameter tuning: \\n Direct evaluation of the objective function is computationally expensive for large-scale problems. Instead, sampling can be used to estimate the objective efficiently. With upper and lower bounds available, the sampling average converges at a sub-Gaussian rate. \\n\\n- Early stopping criterion: \\n If a sufficiently good subset is encountered during sampling, we can terminate the process early. This approach avoids unnecessary computation and provides a practical stopping condition for the algorithm.\\n\\nWe aim to incorporate some version of these considerations into the paper, balancing the available space with their importance.\"}" ] }
5Jc7r5aqHJ
Energy-based Backdoor Defense Against Federated Graph Learning
[ "Guancheng Wan", "Zitong Shi", "Wenke Huang", "Guibin Zhang", "Dacheng Tao", "Mang Ye" ]
Federated Graph Learning is rapidly evolving as a privacy-preserving collaborative approach. However, backdoor attacks are increasingly undermining federated systems by injecting carefully designed triggers that lead to the model making incorrect predictions. Trigger structures and injection locations in Federated Graph Learning are more diverse, making traditional federated defense methods less effective. In our work, we propose an effective Federated Graph Backdoor Defense using Topological Graph Energy (FedTGE). At the local client level, it injects distribution knowledge into the local model, assigning low energy to benign samples and high energy to the constructed malicious substitutes, and selects benign clients through clustering. At the global server level, the energy elements uploaded by each client are treated as new nodes to construct a global energy graph for energy propagation, making the selected clients' energy elements more similar and further adjusting the aggregation weights. Our method can handle high data heterogeneity, does not require a validation dataset, and is effective under both small and large malicious proportions. Extensive results on various settings of federated graph scenarios under backdoor attacks validate the effectiveness of this approach.
[ "Federated Learning", "Graph Learning" ]
Accept (Oral)
https://openreview.net/pdf?id=5Jc7r5aqHJ
https://openreview.net/forum?id=5Jc7r5aqHJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywAIJgTIIs", "ymL87vovDw", "siDDIBWWfJ", "s4N12kpJk4", "rigdmCQpJr", "rdzy7iKJ3M", "rWgqqppSUj", "qUszbszUSe", "ka5VilqR4z", "kFxBfFM1tK", "hORGCdJ9tn", "fbsAGP3j8y", "fJcVu18vcr", "aqabYPL7Rl", "ab1pBOonpE", "LoNirY9bOl", "KvjEYRbOgt", "GztopfuVJW", "ExfuQokT5f", "B2ZVVkghcI", "AswT16g5wM", "9eFiaGMViy", "7o0CPOOmne", "5vZ0Tb6JxS", "4Kxspa1meP", "3gOEiPbxZ5" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730684633165, 1732176068247, 1732415157568, 1732164626610, 1732166032211, 1730440872127, 1732200627661, 1732410334904, 1732167217696, 1732165915513, 1732415749202, 1732198456230, 1737524020186, 1732163192396, 1732160926232, 1734760570777, 1732202194960, 1732410310653, 1732198486698, 1732165575386, 1729655590832, 1732165085515, 1730737811540, 1732167659432, 1732203836434, 1732164766296 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_8A86" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_i7KZ" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_i7KZ" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_8A86" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_eV6Q" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_Z6ow" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Area_Chair_SSLn" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_eV6Q" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_8A86" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_eV6Q" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_Z6ow" ], [ "ICLR.cc/2025/Conference/Submission10018/Reviewer_eV6Q" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ], [ "ICLR.cc/2025/Conference/Submission10018/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents FedTGE, a new method to protect Federated Graph Learning from backdoor attacks by analyzing energy patterns in data distribution. The method works by identifying and separating benign and malicious clients and adjusts how their data contribute to the final FL model. The authors claim that FedTGE performs better than current methods, which can handle high data heterogeneity, does not require a validation dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-This topic is very important to the community, considering the backdoor defense methodology is developing.\\n\\n-Well written and interesting.\\n\\n-Thorough empirical results over a plethora of FL methods.\", \"weaknesses\": \"I have some concerns about the assumption and evaluation of this paper below.\", \"questions\": \"1. The paper assumes that calculating the energy distribution of clients can effectively tell malicious and benign clients apart. However, this method relies on the energy model accurately capturing the real data distribution. 
If the energy model fails to do so, especially in noisy or structurally complex data, the assumption is doubtful.\n\n2. While the paper claims the method is effective under non-IID scenarios, I am not confident that the evaluation under the non-IID-louvain setting alone is sufficient. For example, there can be node label distribution skew and node feature skew. More non-IID conditions should be evaluated.\n\n3. TESP is used to adjust aggregation weights to defend against malicious clients. How does the method prevent incorrectly including benign clients?\n\n4. The computational complexity of the energy model and similarity propagation might become a bottleneck in large-scale cases; the authors should discuss this issue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I have reviewed the rebuttal and am satisfied that the authors have adequately addressed my main concerns. Additionally, I have reviewed the opinions of the other reviewers, and while most raised minor concerns about the experiments, there is general agreement that the proposed method is both innovative and well-executed, and the paper is clearly presented. Considering these strengths, I believe this paper stands out as an excellent submission for ICLR. I am inclined to maintain my score and clearly support its acceptance.\"}", "{\"comment\": \"### Dear Reviewer 8A86,\n\nWe are deeply grateful for your insightful comments and generous support of our research. Your constructive feedback on the scalability and adaptability of FedTGE has greatly enhanced the clarity and rigor of our work. It has been a privilege to address your concerns and refine our manuscript based on your suggestions. 
Once again, we sincerely thank you for your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"[Part 1/2] Response to the Reviewer 8A86\", \"comment\": \"Dear Reviewer 8A86\\n\\nThank you for recognizing the contribution of our work to the community. Your valuable input has been instrumental in helping us refine and enhance our research. Please find our detailed responses below:\\n\\n> `Question 1`: The reasonableness of the assumption\\n\\nWe understand your concern regarding this issue. Our energy model is essentially an energy-based model that integrates the ability of GNNs to capture data structures while attaching energy information to the graph data. Samples that align well with the data modeled by GNNs are assigned low energy, whereas those that do not are assigned high energy. \\n\\nIn datasets with low noise and simple structures, these triggers significantly alter the structural information of the data, serving as a prominent signal for FedTGE to assign higher energy. For example, in the IID scenarios, FedTGE achieves excellent defense performance. Furthermore, in datasets with moderate noise, FedTGE remains effective : **(1)** TESP facilitates repeated energy propagation, narrowing the energy distribution gap among selected clients, which further enhances the clustering effect of TEDC. The threshold filtering mechanism also ensures that mistakenly selected malicious clients are assigned lower aggregation weights or even excluded entirely. **(2)** For triggers, to achieve effective attacks in noisy environments, they must undergo corresponding adjustments, such as adopting more complex trigger structures or incorporating features that deviate from the context. These characteristics assist TEDC and TESP in filtering them out effectively. Our extensive experiments also validate that under the same noise levels, FedTGE consistently outperforms the baseline methods. 
Below we present a portion of the results from the feature skew experiments.\\n\\n**renyi** **(Non-iid-feature-skew with alpha = 0.5 and a malicious proportion of 0.3)** :\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 62.97, 44.79, 53.88 | 84.00, 31.60, 57.80 | 94.05, 48.23, 71.14 |\\n| FLTrust | 60.47, 55.56, 58.02 | 83.68, 32.17, 57.93 | 94.55, 55.36, 74.96 |\\n| FoolsGold | 66.38, 43.21, 54.79 | 86.10, 35.35, 60.73 | 94.02, 44.58, 69.30 |\\n| FLAME | 60.30, 66.50, 63.40 | 83.48, 62.34, 72.91 | 92.85, 62.32, 77.59 |\\n| Trim Median | 62.97, 57.44, 60.21 | 84.10, 34.23, 59.17 | 93.89, 55.23, 74.56 |\\n| Sageflow | 65.98, 56.37, 61.18 | 87.99, 60.24, 74.12 | 93.81, 69.98, 81.90 |\\n| **FedTGE** | 63.69, 88.98, **76.34** | 86.53, 71.29, **78.91** | 94.40, 85.70, **90.05** |\\n\\nAfter introducing a certain amount of noise, FedTGE still demonstrates significantly stronger defense performance compared to the baselines. We will also provide theoretical proofs to support this in the future.\\n\\n---\\n\\n> `Question 2`: More non-IID conditions\\n\\nThank you for your question. In the revised manuscript, we have provided more experimental reports under non-iid scenarios in Appendix B. 
\\n\\n**renyi** **(Non-iid-label-skew with alpha = 0.5 and a malicious proportion of 0.3)** :\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 61.72, 73.89, 67.80 | 85.88, 11.28, 48.58 | 94.34, 39.54, 69.93 |\\n| FLTrust | 63.12, 59.66, 61.39 | 82.63, 8.98, 45.80 | 94.58, 31.25, 62.92 |\\n| FoolsGold | 65.22, 63.98, 64.60 | 86.86, 10.91, 48.88 | 91.31, 61.05, 77.18 |\\n| FLAME | 63.27, 70.08, 66.68 | 84.28, 14.43, 49.36 | 90.27, 42.64, 66.46 |\\n| Trim Median | 60.90, 58.35, 59.62 | 86.39, 7.91, 47.15 | 94.24, 30.72, 62.48 |\\n| Sageflow | 65.42, 44.91, 55.17 | 86.09, 0.35, 43.22 | 94.59, 26.51, 60.55 |\\n| **FedTGE** | 64.69, 88.91, **76.60** | 86.98, 43.65, **65.32** | 94.79, 86.79, **90.79** |\\n\\nSome defense methods based on model updates lose most of their effectiveness in the label-skew scenario on the Pubmed dataset. In these scenarios, client data becomes biased toward specific labels, causing model updates to focus on majority class labels and making attacks harder to defend against. In contrast, FedTGE adjusts at the data level and does not rely directly on model updates, allowing it to maintain stronger defensive capabilities than the baselines.\\n\\n---\"}", "{\"title\": \"[Part 3/3] Response to the Reviewer eV6Q\", \"comment\": \"> `Question 4`: The verification attacking method needs more details, and other attacks should be further evaluated.\\n\\nThank you for pointing this out. 
In the revised manuscript, we have included more details about the attacking method and experimental results for different trigger patterns in Appendix B. We have selected results for some diverse trigger structures and present them below.\\n\\n**GTA (Non-iid-louvain with a malicious proportion of 0.3):**\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 78.54, 27.96, 53.25 | 86.86, 11.70, 49.28 | 93.42, 24.86, 59.14 |\\n| FLTrust | 78.21, 43.01, 60.61 | 86.36, 36.05, 61.21 | 94.21, 59.26, 76.74 |\\n| FoolsGold | 79.56, 25.60, 52.58 | 89.15, 16.70, 52.87 | 94.68, 40.56, 67.62 |\\n| FLAME | 77.54, 67.13, 72.34 | 85.45, 40.51, 62.98 | 93.28, 50.49, 71.89 |\\n| Trim Median | 79.49, 33.06, 56.26 | 86.63, 14.58, 50.60 | 93.55, 27.39, 60.47 |\\n| Sageflow | 79.95, 29.49, 54.72 | 89.04, 16.70, 52.87 | 94.54, 54.69, 74.62 |\\n| **FedTGE** | 80.23, 75.23, **77.73** | 87.56, 55.34, **71.45** | 94.39, 72.37, **83.38** |\\n\\n**WS (Non-iid-louvain with a malicious proportion of 0.3):**\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 77.31, 74.74, 76.03 | 85.34, 81.97, 83.66 | 93.82, 63.03, 78.42 |\\n| FLTrust | 76.29, 93.51, 84.90 | 86.43, 74.67, 80.55 | 93.87, 63.19, 78.53 |\\n| FoolsGold | 82.91, 84.72, 83.82 | 87.02, 78.28, 82.65 | 94.09, 68.29, 81.19 |\\n| FLAME | 82.25, 84.23, 82.24 | 86.26, 70.37, 78.32 | 93.37, 58.83, 76.10 |\\n| Trim Median | 77.31, 74.74, 76.03 | 85.13, 71.39, 78.26 | 92.95, 58.41, 75.68 |\\n| Sageflow | 81.81, 89.40, 85.61 | 86.10, 86.34, 86.22 | 94.00, 67.24, 80.62 |\\n| **FedTGE** | 81.76, 91.98, **86.87** | 86.23, 90.19, **88.21** | 94.45, 85.69, **90.07** |\\n\\nThe GTA algorithm injects triggers with distinct shapes, such as star or ring patterns, which exhibit
high attack efficacy but suffer from low stealthiness. In contrast, the WS algorithm, based on the Watts-Strogatz small-world model, introduces triggers that typically possess realistic network characteristics, resulting in higher stealthiness but at the cost of relatively lower attack effectiveness. Nevertheless, FedTGE is able to maintain strong defense performance even under these attack scenarios.\\n\\n\\n\\n[1]: Yang, Y.; Li, Q.; Jia, J.; Hong, Y.; and Wang, B. 2024. Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses.\\n\\n[2]: Xiao, Z.; Zhen, X.; Liao, S.; and Snoek, C. G. M. 2024. Energy-Based Test Sample Adaptation for Domain Generalization.\\n\\n[3]: Hristozov, S.; Wettermann, M.; and Huber, M. 2022. A TOCTOU Attack on DICE Attestation.\"}", "{\"summary\": \"This paper addresses the issue of backdoor attacks in Federated Graph Learning and proposes a defense method based on Topological Graph Energy. The method injects structural distributional knowledge into the model. At the client level, it models energy to differentiate between benign and malicious samples, while at the server level, it constructs a global energy graph for energy propagation, effectively identifying and excluding malicious clients. Experimental results demonstrate that FedTGE performs well under various attack proportions and in heterogeneous scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper is the first to introduce energy-based models into backdoor defense in FGL, effectively addressing the complexities of trigger injection locations and diverse attack forms by learning the energy structure distribution of the data through constructing an energy-based model, which leverages the unique topological characteristics of graph data.\", \"The methodology is well explained and structured. 
In particular, the authors provide a detailed step-by-step breakdown of their approach to help in understanding the core mechanisms of FedTGE. Furthermore, the framework diagram clearly explains the energy propagation and clustering processes, making it easier to grasp the interactions between different modules.\", \"By constructing an energy graph and propagating energy across clients, FedTGE overcomes the limitation that traditional federated defense methods struggle to adapt to graph data. This approach combines structural knowledge to enable more fine-grained and accurate identification of malicious entities, thus enhancing the robustness and adaptability of the defense system across diverse scenarios.\"], \"weaknesses\": [\"Although the experiments demonstrate the superiority of FedTGE over traditional methods, the paper lacks sufficient discussion of existing backdoor defense techniques in traditional federated learning. For instance, RFA appears in the performance table but is not comparatively analyzed in the introduction. The authors should clarify the shortcomings of these methods in the given scenario.\"], \"questions\": \"I noticed that the authors conducted extensive experiments under the \\\"Renyi\\\" setting and achieved excellent results. However, to my knowledge, there are other effective attack methods in FGL, such as GTA. 
Could the authors provide relevant experiments to verify the effectiveness of their approach against such attacks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NO.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response to our rebuttal\", \"comment\": \"### Dear Reviewer Z6ow,\\n\\nThank you for your kind feedback on our method and for recognizing the novelty of using energy models in backdoor defense.\\n\\n**Application Scope**: Our method is effective for the current scenarios and highly extensible to graph-level tasks, making it applicable across a wide range of use cases. Furthermore, our approach demonstrates strong scalability, ensuring its feasibility in handling larger datasets and federated scenarios with more clients.\\n\\n**Deployment Details**: We have provided implementation details in the manuscript (Sec 5.1) and anonymous code to facilitate understanding and reproducibility. The experiments use NVIDIA GeForce RTX 3090 GPUs as the hardware platform, coupled with Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz. Should you have any specific aspects of the deployment process in mind that require further elaboration, we would be more than happy to clarify or expand upon them.\\n\\nIf you have any additional questions or suggestions, please let us know. We are committed to addressing all concerns to improve our work further.\\n\\nThank you again for your thoughtful review and support. We hope this will contribute to a higher evaluation of our submission.\"}", "{\"comment\": \"I have carefully reviewed the rebuttal regarding Questions 3 and 4, while also considering the responses to Questions 1 and 2, and I believe the authors have effectively addressed all my concerns. 
I now consider this paper to meet the acceptance standards of ICLR and will accordingly increase my score.\"}", "{\"comment\": \"For weakness 1, it would be better to show the connection between the different energy distributions and the node distribution. In addition, it seems that there is no appendix D in the submitted file.\\n\\nFor weakness 2, the authors should refine the statement, since the authors have not clearly stated that the paper focuses only on the node classification problem in Section 1.\\n\\nFor weakness 3, thanks for the response. However, this explanation does not actually make clear why FedFreq fails, and why it cannot identify malicious clusters.\"}", "{\"title\": \"[Part 2/3] Response to the Reviewer eV6Q\", \"comment\": \"> `Question 1`: Does the proposed energy-based estimation work on traditional FL or other domains?\\n\\nThank you for this question. The core of the FedTGE method lies in modeling and adjusting the energy of samples (i.e., their unnormalized probabilities). Based on this insight, energy-based estimation can be extended to a variety of fields. This includes traditional FL, such as backdoor defense, where it can similarly assign lower energy to benign samples. However, since images lack the structural properties inherent to graphs, its effectiveness in this domain may be limited. Additionally, it can also be utilized to enhance model generalization. For instance, [2] is an excellent related work that improves model generalization by adjusting the energy of test samples.\\n\\n---\\n\\n> `Question 2`: Does the proposed scheme work for untargeted attacks?\\n\\nAn untargeted attack does not focus on specific target nodes; instead, its impact typically spans the entire graph or a large subset of samples, often significantly disturbing the statistical distribution of the input data. Although our method is specifically designed for targeted attacks, it can theoretically also be applied to untargeted attacks. 
To demonstrate this, we conducted a simple experiment under the DICE [3] with 10% perturbed edges setting, using the misclassification rate (%) as the evaluation metric. The results are as follows:\\n\\n| Methods | Pubmed | photo | Cs |\\n| :---------------: | :----------: | :----------: | :----------: |\\n| **Clean** | 15.26 \\u00b1 0.35 | 25.14 \\u00b1 0.78 | 9.78 \\u00b1 0.45 |\\n| **DICE** | 19.11 \\u00b1 1.12 | 30.28 \\u00b1 1.33 | 15.78 \\u00b1 1.12 |\\n| **DICE + FedTGE** | 16.21 \\u00b1 0.37 | 26.23 \\u00b1 0.52 | 9.83 \\u00b1 0.34 |\\n\\n---\\n\\n\\n\\n> `Question 3`: Does the proposed scheme work for other types of graph learning tasks, such as graph classification and link prediction?\\n\\nThank you for this question. We have also applied the proposed method to graph classification and link prediction to further verify its scalability. The results are shown below, where the metrics are reported following the original manuscript's A, R, and V:\\n\\n**graph classification**\\n\\n| Methods | NCI1 (A,R,V) | PROTEINS_full (A,R,V) | DD (A,R,V) |\\n| :---------------------------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 81.22, 17.59, 49.41 | 73.84, 48.61, 61.23 | 63.84, 16.88, 40.36 |\\n| Trimmed Mean | 80.78, 51.59, 66.19 | 74.21, 48.60, 61.41 | 62.04, 11.43, 36.74 |\\n| Trim Median | 79.51, 30.70, 55.11 | 74.77, 48.61, 61.69 | 64.11, 18.33, 41.22 |\\n| RSA | 78.70, 20.15, 45.43 | 73.34, 47.89, 60.62 | 66.35, 11.13, 38.74 |\\n| FLAME | 77.62, 37.19, 57.41 | 73.60, 50.12, 61.86 | 66.18, 32.25, 49.22 |\\n| G^2uard | 78.70, 42.26, 60.48 | 74.40, 49.21, 61.81 | 64.51, 13.03, 38.77 |\\n| Sageflow | 76.67, 27.48, 52.08 | 74.79, 47.38, 61.09 | 66.11, 14.07, 40.09 |\\n| FreqFed | 76.60, 40.93, 58.77 | 76.05, 52.78, 64.42 | 61.24, 14.46, 37.85 |\\n| Certified Defense (css24) | 79.64, 64.19, 71.92 | 74.04, 53.16, 63.60 | 64.39, 42.77, 53.58 |\\n| FedTGE | 80.13, 64.89, **72.51** | 75.29, 61.23, **68.26** | 
65.89, 47.62, **56.76** |\\n\\n\\n**link prediction**\\n\\nThe attack methods for link prediction are highly similar to those for node classification, as both involve injecting fake nodes or subgraphs to alter the contextual information of local graph data. Therefore, FedTGE is theoretically fully applicable to link prediction tasks.\"}", "{\"comment\": \"### Dear Reviewer i7KZ,\\nWe greatly appreciate your thoughtful feedback and unwavering support for our work. Your detailed suggestions have played a crucial role in refining our study. We deeply value the effort and expertise you have dedicated to this review process, and we remain truly grateful for your guidance and support.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"> `Weakness 1 & 2`\\n\\nThank you for pointing out issues that were not clearly explained. We apologize for any inconvenience caused by the delayed update; the revised submission has now been uploaded. The visualization of the energy distribution before and after energy calibration is provided in Appendix E. We have explicitly stated in the contributions section that our work focuses on node classification and is extendable to graph classification.\\n\\n\\n\\n> `Weakness 3`: Reasons for the performance degradation of FedFreq\\n\\nFedFreq assumes that the low-frequency components of model parameter weights represent the overall trend and global characteristics of weight changes, while high-frequency components are associated with noise and local details. However, the weight distribution of graph data is highly dependent on the graph's topological structure (e.g., node degree distribution and community structure). In non-Euclidean spaces, the graph topologies of different clients can vary significantly, leading to completely different spectral distributions even among benign clients. This variability makes it difficult to accurately distinguish between malicious and benign updates based on low-frequency features. 
As a result, HDBSCAN may misclassify the weights of benign clients as malicious updates. \\n\\n\\n\\nIn the revised manuscript, we have added a visualization analysis of the low-frequency components of FedFreq in Appendix F. We use Principal Component Analysis (PCA) to visualize the distances of low-frequency components across different clients. It can be observed that defense methods based on low-frequency components (model updates) are not suitable for GCNs and may even filter out benign clients.\", \"title\": \"Thank you for your response to our rebuttal\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"I read this rebuttal and appreciate the authors' response. Introducing energy models into backdoor defence is interesting, and the experiments show its feasibility. A minor suggestion is that more details on the application scope and deployment could be provided.\"}", "{\"comment\": \"Dear Reviewer Z6ow\\n\\nWe sincerely thank the reviewers for their valuable and constructive feedback, which has helped us identify ways to improve the clarity and quality of our manuscript. We have carefully addressed each comment, and our detailed responses are provided below.\\n\\n> `Weakness 1`: Visualization of the energy distribution\\n\\nTo address your suggestion regarding the visualization of energy distribution during the clustering process, we have included kernel density estimation plots in Appendix E of the revised manuscript. These plots show the energy distribution before and after energy calibration for different trigger types. The results demonstrate that our method effectively models node energy across various datasets and trigger types, clearly distinguishing between the energy distributions of malicious and benign clients.\\n\\n---\\n\\n> `Weakness 2`: Complexity analysis\\n\\nThank you for this valuable suggestion. 
In the revised version of the manuscript, we have included a complete complexity analysis in the appendix. The computational complexity of FedTGE can be formalized as:\\n\\n**TEDC**:\\n$$\\nO((3E \\\\times F + 2D \\\\times F + 2D + P_{BN}) \\\\times EN)\\n$$\\nwhere $P_{BN}$ represents the parameters of the Batch Normalization (BN) layer in the model, $E$ denotes the number of edges in the dataset, $D$ and $F$ stand for the number of nodes and features, respectively, and $EN$ refers to the epoch count used for energy calibration. In non-dense graphs, $E$ can be considered proportional to $D$, and $P_{BN}$ can be neglected. Therefore, the formula can be further simplified as:\\n$$\\nO(E \\\\times F \\\\times EN)\\n$$\\nThis indicates that the TEDC module has a linear relationship with the number of edges $E$, which demonstrates its scalability.\\n\\n**TESP**:\\n$$\\nO(E \\\\times L + N^2 + K \\\\times E + N \\\\times M)\\n$$\\nWe only analyzed the complexity of the similarity propagation part. Here, $N$ represents the number of clients, $K$ and $L$ denote the number of propagation layers and the length of the energy distribution (i.e., the number of nodes), respectively. It is worth noting that the TESP module treats clients as nodes and the connections between clients as edges. Since $N$ and $E$ are usually relatively small constants, the above formula can be further simplified as:\\n$$\\nO(D+K)\\n$$\\nThis indicates that the TESP module has a linear relationship with the number of nodes $D$, which demonstrates its suitability for large-scale datasets. For transmission costs, FedTGE does not introduce significant additional transmission pressure. Apart from transmitting the parameters of the backbone, FedTGE additionally transmits the energy distribution of each client to the server. 
Its length $L$ matches the number of nodes $D$ in the dataset, making it a completely acceptable transmission cost.\\n\\n---\\n\\n> `Weakness 3`: More defense performance under various trigger types\\n\\nWe agree that evaluating with additional trigger types would provide a more comprehensive assessment of the system's resilience. In the revised manuscript, we have provided experimental reports on more diverse trigger structures in Appendix B. The results show that FedTGE maintains higher defensive performance than the baselines across various trigger types. Below we present partial experimental results for the GTA trigger type.\\n\\n**GTA (Non-iid-louvain with a malicious proportion of 0.3):**\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 78.54, 27.96, 53.25 | 86.86, 11.70, 49.28 | 93.42, 24.86, 59.14 |\\n| FLTrust | 78.21, 43.01, 60.61 | 86.36, 36.05, 61.21 | 94.21, 59.26, 76.74 |\\n| FoolsGold | 79.56, 25.60, 52.58 | 89.15, 16.70, 52.87 | 94.68, 40.56, 67.62 |\\n| Trim Median | 79.49, 33.06, 56.26 | 86.63, 14.58, 50.60 | 93.55, 27.39, 60.47 |\\n| Sageflow | 79.95, 29.49, 54.72 | 89.04, 16.70, 52.87 | 94.54, 54.69, 74.62 |\\n| **FedTGE** | 80.23, 75.23, **77.73** | 87.56, 55.34, **71.45** | 94.39, 72.37, **83.38** |\\n\\n---\\n\\n> `Weakness 4`: Selection of the threshold \\n\\nFedTGE utilizes the advanced FINCH technique for clustering the energy distribution of clients, which typically does not require pre-setting a threshold. However, in our TESP component, a threshold is applied to determine whether the energy similarity between two clients is sufficient for energy propagation. This threshold is designed to **enhance the robustness** of FedTGE, as shown in Appendix D. 
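The thresholded propagation step can be sketched as follows (a numpy-only toy; cosine similarity and row-averaged propagation are our illustrative assumptions, not the exact TESP update):

```python
import numpy as np

def cosine_sim(E):
    """Pairwise cosine similarity between client energy distributions."""
    X = E / np.clip(np.linalg.norm(E, axis=1, keepdims=True), 1e-12, None)
    return X @ X.T

def propagate_energy(E, threshold=0.8, layers=2):
    """Treat clients as nodes; propagate (average) energy only along edges
    whose similarity clears `threshold` (0.8 is the paper's default)."""
    S = cosine_sim(E)
    A = (S >= threshold).astype(float)   # adjacency incl. self-loops (diag sim = 1)
    A /= A.sum(axis=1, keepdims=True)    # row-normalize for neighbourhood averaging
    for _ in range(layers):
        E = A @ E
    return E

# toy example: 4 clients with similar energy distributions, 1 outlier
E = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.0, 3.1],
              [0.9, 2.1, 2.9],
              [1.0, 1.9, 3.0],
              [5.0, 0.1, 0.2]])
E_prop = propagate_energy(E)
```

Clients whose distributions clear the 0.8 bar average toward each other, while the dissimilar (potentially malicious) client stays isolated with its own energy, making it easier for the downstream clustering to exclude.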
Even if a malicious client is mistakenly selected, it is difficult for such a client to maintain high energy-distribution similarity with benign clients, which further helps to exclude malicious clients. Therefore, the threshold is typically set to a relatively high value, such as the default value of 0.8 in the paper, which has been shown to achieve effective defense.\", \"title\": \"[Part 1/1] Response to the Reviewer Z6ow\"}", "{\"metareview\": \"This paper tackles the challenge of backdoor attacks in Federated Graph Learning and also introduces a defense strategy based on Topological Graph Energy. The proposed approach incorporates structural distributional knowledge into the model. At the client level, it uses energy modeling to distinguish between benign and malicious samples, while at the server level, it creates a global energy graph for energy propagation, effectively detecting and filtering out malicious clients. Experimental results also demonstrate FedTGE's effectiveness.\", \"strength\": \"1. The proposed methods are interesting and novel.\\n\\n2. The paper's structure and writing are good.\", \"weakness\": \"Although the authors discussed their method's complexity and claimed it can be extended to large-scale datasets, no further experiments are provided to demonstrate the proposed method's effectiveness on large-scale datasets.\\n\\nIn summary, this is a good paper with a novel method design, comprehensive evaluations, and good writing. All the reviewers show a positive attitude towards this paper, and I also suggest accepting it as a spotlight.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers and authors discussed the paper's additional evaluations, scenarios, complexities, etc. These discussions also improve this paper.\"}", "{\"comment\": \"Thanks for the detailed response. The reviewer has no further comments and will increase the score.\"}", "{\"comment\": \"I have carefully reviewed the authors' rebuttal. 
The additional discussions and experiments on the fairness of the energy model under non-IID settings appear convincing and adequately address my concerns regarding Questions 1 and 2. A minor suggestion is to include the discussion of Question 1 in the paper if it has not already been added.\"}", "{\"comment\": \"> `Question 3`\\n\\nThe modifications required to apply FedTGE to graph classification focus on replacing the perturbation of node subgraphs with the perturbation of entire graphs. Additionally, the defense mechanism of Certified Defense (CCS 24) may have inherent limitations, which could account for its inferior performance compared to FedTGE: (1) **Loss of Graph Information**: Certified Defense splits the test graph into multiple non-overlapping subgraphs for individual predictions. This process can result in significant information loss, potentially affecting prediction accuracy. For example, in domains like proteins or biomolecules, arbitrary partitioning of graphs may fail to produce correct labels due to the critical interdependencies within the graph structure. (2) **Lower Robustness**: Certified Defense cannot directly guarantee that the trigger will be entirely contained within a single subgraph during the partitioning process. If the trigger is distributed across multiple subgraphs, it could influence the predictions of several subgraphs, making the voting mechanism appear fragile and less reliable. These issues ultimately result in Certified Defense achieving overall performance that is inferior to FedTGE.\", \"title\": \"Thank you for your response to our rebuttal\"}", "{\"title\": \"[Part 1/3] Response to the Reviewer eV6Q\", \"comment\": \"Dear Reviewer eV6Q\\n\\nWe sincerely thank you for your insightful feedback. 
We have provided detailed responses to your questions, and we hope that these clarifications and revisions meet your expectations and potentially merit a higher evaluation.\\n\\n> `Weakness 1`: Visualization and More non-iid scenarios\\n\\nThank you for your valuable feedback regarding the details of the experiments. To address this issue comprehensively, we have provided detailed visualizations of the energy distribution for each client before and after clustering, as well as additional experimental reports for non-iid scenarios. These have been included in Appendix E and Appendix B, respectively. Below we present additional experimental results for non-IID scenarios:\\n\\n**renyi** **(Non-iid-feature-skew with alpha = 0.5 and a malicious proportion of 0.3)** :\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 62.97, 44.79, 53.88 | 84.00, 31.60, 57.80 | 94.05, 48.23, 71.14 |\\n| FLTrust | 60.47, 55.56, 58.02 | 83.68, 32.17, 57.93 | 94.55, 55.36, 74.96 |\\n| FoolsGold | 66.38, 43.21, 54.79 | 86.10, 35.35, 60.73 | 94.02, 44.58, 69.30 |\\n| FLAME | 60.30, 66.50, 63.40 | 83.48, 62.34, 72.91 | 92.85, 62.32, 77.59 |\\n| Sageflow | 65.98, 56.37, 61.18 | 87.99, 60.24, 74.12 | 93.81, 69.98, 81.90 |\\n| **FedTGE** | 63.69, 88.98, **76.34** | 86.53, 71.29, **78.91** | 94.40, 85.70, **90.05** |\\n\\n**renyi** **(Non-iid-label-skew with alpha = 0.5 and a malicious proportion of 0.3)** :\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 61.72, 73.89, 67.80 | 85.88, 11.28, 48.58 | 94.34, 39.54, 69.93 |\\n| FLTrust | 63.12, 59.66, 61.39 | 82.63, 8.98, 45.80 | 94.58, 31.25, 62.92 |\\n| FoolsGold | 65.22, 63.98, 64.60 | 86.86, 10.91, 48.88 | 91.31, 61.05, 77.18 |\\n| FLAME | 63.27, 70.08, 66.68 | 84.28, 
14.43, 49.36 | 90.27, 42.64, 66.46 |\\n| Sageflow | 65.42, 44.91, 55.17 | 86.09, 0.35, 43.22 | 94.59, 26.51, 60.55 |\\n| **FedTGE** | 64.69, 88.91, **76.60** | 86.98, 43.65, **65.32** | 94.79, 86.79, **90.79** |\\n\\n---\\n\\n> `Weakness 2`: Omission of Standard Defenses \\n\\nWe appreciate your valuable feedback and acknowledge the omission of certain standard defense methods in our initial submission. We have carefully reviewed [1] and have incorporated the methods you mentioned as references in the revised manuscript. What we want to clarify is that FLTrust relies on the availability of a clean and trustworthy dataset at the server side. However, the size or feasibility of **such a dataset has not been clearly established in the graph domain**. This limitation is why it was not included in the original experimental design. \\n\\n[1] is an excellent concurrent work focusing on backdoor attacks and defenses in the context of graph classification. The Certified Defense method discussed in [1] primarily involves dividing test graphs into multiple subgraphs and determining the final label through majority voting. However, this approach does not fully adapt to node classification tasks. In contrast, our proposed FedTGE, which performs node-level energy adjustment, is also applicable to defense tasks in graph classification scenarios. For relevant experimental results, please refer to our response to Question 3.\\n\\n---\\n\\n> `Weakness 3`: Reasons for the performance degradation of FedFreq \\n\\nThank you for highlighting this concern. FedFreq is indeed an advanced method in traditional federated learning, relying on clustering and filtering malicious clients based on their uploaded model updates. However, backdoor attack patterns in FGL pose greater challenges compared to those in traditional FL. 
Specifically, the randomness in the location and shape of trigger injections, as well as their distribution, makes traditional model update-based defense methods, such as Trimmed Mean and FoolsGold, less effective in FGL.\\n\\nFedFreq relies on model updates to identify malicious clients and completely excludes the model parameters of these clients. However, this approach often fails to accurately identify malicious clusters in FGL. This inevitably leads to the loss of knowledge learned by some clients, resulting in significant performance degradation in certain scenarios. A similar issue can also be observed in the DnC algorithm.\\n\\n---\"}", "{\"summary\": \"The authors propose an energy-based backdoor defense for federated graph learning. Experimental results on the node classification task under the mentioned backdoor attacks are provided, with comparisons to several defenses, and performance gains are observed in iid and non-iid scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The estimation of using energy distribution to recognize malicious clients seems to be novel.\\n\\n2. Comparisons with some advanced defenses are provided and show the advantages.\", \"weaknesses\": \"1. Key details on the different energy distributions of malicious clients and others are not provided. In addition, the influence of this factor in the non-iid scenario needs further verification.\\n\\n2. Some standard defenses are missing, such as FLAME, FL Trust, and G^2uard FL. In addition, the authors should pay attention to their statements carefully, since the proposed work is actually not the first to conduct backdoor attacks in FGL. For example, \\\"Distributed backdoor attacks on federated graph learning and certified defenses\\\" published in CCS 2024 is a good work that should be compared.\\n\\n3. 
It is strange that FedFreq shows unexpectedly low performance in the experiments, which is not consistent with its own reported results; the authors may clarify the reason.\", \"questions\": \"Besides the weakness, the authors may consider the following points to enhance the paper.\\n\\n1. Does the proposed energy-based estimation work on traditional FL or other domains? \\n\\n2. Does the proposed scheme work for untargeted attacks?\\n\\n3. Does the proposed scheme work for other types of graph learning tasks, such as graph classification and link prediction?\\n\\n4. The verification attacking method needs more details, and other attacks should be further evaluated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Part 1/1] Response to the Reviewer i7KZ\", \"comment\": \"Dear Reviewer i7KZ\", \"we_sincerely_thank_you_for_taking_the_time_to_evaluate_our_work_and_have_addressed_your_concerns_as_follows\": \"> `Weakness 1`: More discussion on the baseline.\\n\\nWe appreciate your feedback recognizing the advantages of our approach and pointing out the missing discussion on certain baselines. We would like to clarify that, in Section 5.2 (Performance Comparison), we categorized RFA under statistical distribution-based methods and provided an explanation. To ensure a comprehensive analysis of why baselines may not perform well in the given scenario, we will provide further details or clarifications as needed to enhance your understanding of our work. \\n\\n> `Question 1`: More defense performance under various trigger types\\n\\nWe sincerely appreciate your insightful observations. In the revised manuscript, we have provided additional experimental reports in Appendix B. 
We have selected some of the results to display below:\\n\\n**GTA (Non-iid-louvain with a malicious proportion of 0.3):**\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 78.54, 27.96, 53.25 | 86.86, 11.70, 49.28 | 93.42, 24.86, 59.14 |\\n| FLTrust | 78.21, 43.01, 60.61 | 86.36, 36.05, 61.21 | 94.21, 59.26, 76.74 |\\n| FoolsGold | 79.56, 25.60, 52.58 | 89.15, 16.70, 52.87 | 94.68, 40.56, 67.62 |\\n| FLAME | 77.54, 67.13, 72.34 | 85.45, 40.51, 62.98 | 93.28, 50.49, 71.89 |\\n| Trim Median | 79.49, 33.06, 56.26 | 86.63, 14.58, 50.60 | 93.55, 27.39, 60.47 |\\n| Sageflow | 79.95, 29.49, 54.72 | 89.04, 16.70, 52.87 | 94.54, 54.69, 74.62 |\\n| **FedTGE** | 80.23, 75.23, **77.73** | 87.56, 55.34, **71.45** | 94.39, 72.37, **83.38** |\\n\\n**WS (Non-iid-louvain with a malicious proportion of 0.3):**\\n\\n| Methods | Cora (A, R, V) | Pubmed (A, R, V) | Physics (A, R, V) |\\n| :---------: | :-------------------------: | :-------------------------: | :-------------------------: |\\n| FedAvg | 77.31, 74.74, 76.03 | 85.34, 81.97, 83.66 | 93.82, 63.03, 78.42 |\\n| FLTrust | 76.29, 93.51, 84.90 | 86.43, 74.67, 80.55 | 93.87, 63.19, 78.53 |\\n| FoolsGold | 82.91, 84.72, 83.82 | 87.02, 78.28, 82.65 | 94.09, 68.29, 81.19 |\\n| FLAME | 82.25, 84.23, 82.24 | 86.26, 70.37, 78.32 | 93.37, 58.83, 76.10 |\\n| Trim Median | 77.31, 74.74, 76.03 | 85.13, 71.39, 78.26 | 92.95, 58.41, 75.68 |\\n| Sageflow | 81.81, 89.40, 85.61 | 86.10, 86.34, 86.22 | 94.00, 67.24, 80.62 |\\n| **FedTGE** | 81.76, 91.98, **86.87** | 86.23, 90.19, **88.21** | 94.45, 85.69, **90.07** |\\n\\nThe GTA algorithm injects triggers with distinct shapes, such as star or ring patterns, which exhibit high attack efficacy but suffer from low stealthiness. 
In contrast, the WS algorithm, based on the Watts-Strogatz small-world model, introduces triggers that typically possess realistic network characteristics, resulting in higher stealthiness but at the cost of relatively lower attack effectiveness. Nevertheless, FedTGE is able to maintain strong defense performance even under these attack scenarios.\\n\\n\\n\\n---\"}", "{\"summary\": \"This paper addresses the challenges of backdoor attacks in Federated Graph Learning (FGL) by proposing an innovative defense method called FedTGE, which utilizes Topological Graph Energy. The method operates at both the client and server levels, injecting energy distribution knowledge into local models to differentiate between benign and malicious samples. It assigns low energy to benign samples while elevating the energy of constructed malicious substitutes, enabling the selection of trustworthy clients through clustering. At the server level, the energy elements uploaded by clients are treated as nodes in a global energy graph, facilitating energy propagation and enhancing the robustness of client selection. 
The experimental results validate the effectiveness of FedTGE under varying proportions of malicious clients and different data scenarios, demonstrating its capability to handle high data heterogeneity without requiring a validation dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The introduction of the FedTGE framework is innovative, employing energy-based modeling to defend against backdoor attacks in federated graph learning (FGL).\", \"The paper presents a comprehensive evaluation of the proposed method across various datasets and scenarios, demonstrating its effectiveness in both IID and non-IID settings.\", \"The manuscript is well-structured, presenting a clear delineation of the proposed methodology, experimental setup, and results.\"], \"weaknesses\": [\"The efficacy of the FedTGE method heavily relies on the accurate modeling of energy distributions. To enhance the robustness of energy estimations, the authors should consider incorporating visual analysis of energy distributions prior to clustering.\", \"The computational demands associated with energy graph construction and similarity propagation may hinder scalability. The authors should discuss about the computational overhead and the costs of federated transmission.\", \"The authors should investigate the defense performance across different trigger structures and patterns to better understand and enhance the system's resilience under varied attack scenarios.\", \"The method requires careful threshold selection for clustering energy elements, which could be subjective and impact performance. 
The authors should describe how to determine these thresholds.\", \"questions\": \"Refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"For Question 3 (linked to the authors' response for weakness 2), it shows the proposed algorithm has a better performance than Certified Defense (CCS 24), which is unexpected. To extend the proposed method into the graph classification scenario, what kind of modifications should be made? In addition, the implementation details should be provided for further verification.\"}", "{\"comment\": \"Thank you for your thoughtful consideration and willingness to reevaluate the score. We sincerely appreciate your recognition of our efforts and the constructive feedback that has helped us improve our work.\"}", "{\"title\": \"[Part 2/2] Response to the Reviewer 8A86\", \"comment\": \"> `Question 3`: How does TESP prevent the incorrect inclusion of benign clients?\\n\\nThank you for your detailed observation. We address your question from two perspectives: **(1)** **Benign clients are almost never excluded from the aggregation process.** The TESP component is designed to first calculate the energy distribution similarity between clients. A client is excluded from the aggregation process only if its similarity with all other clients falls below a specified threshold. Visualization results of the clustering process have been provided in Appendix D, where it can be clearly observed that no benign client, after the TEDC component adjusts energy, has a similarity distribution so low with all other clients that it would be excluded from aggregation. **(2)** **No benign client is assigned a very low aggregation weight, ensuring proper knowledge sharing.** The TESP clustering adjustment is based on the sum of the energy corresponding to each client, which is not directly related to similarity calculation. 
The smaller the sum of energy corresponding to a client, the larger the aggregation weight assigned to it. Therefore, the method ensures that benign clients are assigned appropriate aggregation weights.\\n\\n---\\n\\n> `Question 4`: Complexity Analysis\\n\\nThank you for this valuable suggestion. In the revised version of the manuscript, we have included a complete complexity analysis. The overall computational complexity of FedTGE can be formalized as:\\n\\n**TEDC**:\\n$$\\nO((3E \\\\times F + 2D \\\\times F + 2D + P_{BN}) \\\\times EN)\\n$$\\nwhere $P_{BN}$ represents the parameters of the Batch Normalization (BN) layer in the model, $E$ denotes the number of edges in the dataset, $D$ and $F$ stand for the numbers of nodes and features, respectively, and $EN$ refers to the epoch count used for energy calibration. In non-dense graphs, $E$ can be considered proportional to $D$, and $P_{BN}$ can be neglected. Therefore, the formula can be further simplified as:\\n$$\\nO(E \\\\times F \\\\times EN)\\n$$\\nThis indicates that the TEDC module has a linear relationship with the number of edges $E$ and is independent of the number of clients, enabling energy calibration and clustering for large-scale client numbers.\\n\\n**TESP**:\\n$$\\nO(E\\\\times L+ N^2+K\\\\times E + N\\\\times M)\\n$$\\nWe did not calculate the complexity of parameter aggregation because it is a process that exists in all FL systems. We only analyzed the complexity of the similarity propagation part. Here, $N$ represents the number of clients, $K$ and $L$ denote the number of propagation layers and the length of the energy distribution (i.e., the number of nodes), respectively. It is worth noting that the TESP module treats clients as nodes and the connections between clients as edges. 
Since $N$ and $E$ are usually relatively small constants, the above formula can be further simplified as:\\n\\n$$\\nO(D+K)\\n$$\\n\\nThis indicates that the TESP module has a linear relationship with the number of nodes $D$, which demonstrates its suitability for large-scale datasets. \\n\\n---\"}"
] }
5JXvgNCQUq
Regularized Optimal Transport for Single-Cell Temporal Trajectory Analysis
[ "Jie Peng", "Xuan Song", "Zhigang He", "Marinka Zitnik", "Manolis Kellis", "Yanyong Zhang", "Tianlong Chen" ]
The temporal relationship between different cellular states and lineages is only partially understood and has major significance for cell differentiation and cancer progression. However, two pain points persist and limit learning-based solutions: ($a$) lack of real datasets and standardized benchmark for early cell developments; ($b$) the complicated transcriptional data fail classic temporal analyses. We integrate $\texttt{Mouse-RGC}$, a large-scale mouse retinal ganglion cell dataset with annotations for $9$ time stages and $30,000$ gene expressions. Existing approaches show a limited generalization of our datasets. To tackle the modeling bottleneck, we then translate this fundamental biology problem into a machine learning formulation, $\textit{i.e.}$, $\textit{temporal trajectory analysis}$. An innovative regularized optimal transport algorithm, $\texttt{TAROT}$, is proposed to fill in the research gap, consisting of ($1$) customized masked autoencoder to extract high-quality cell representations; ($2$) cost function regularization through biology priors for distribution transports; ($3$) continuous temporal trajectory optimization based on discrete matched time stages. Extensive empirical investigations demonstrate that our framework produces superior cell lineages and pseudotime, compared to existing approaches on $\texttt{Mouse-RGC}$ and another two public benchmarks. Moreover, $\texttt{TAROT}$ is capable of identifying biologically meaningful gene sets along with the developmental trajectory, and its simulated gene knockout results echo the findings in physical wet lab validation.
[ "single-cell transcriptomics", "temporal trajectory analysis", "optimal transport" ]
Reject
https://openreview.net/pdf?id=5JXvgNCQUq
https://openreview.net/forum?id=5JXvgNCQUq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zIshTckXMG", "v9qO5QgCKn", "oQZpgxEey0", "7r1e8Bix5b", "3FYeYAx11K" ], "note_type": [ "official_review", "official_review", "official_review", "decision", "meta_review" ], "note_created": [ 1730531841364, 1730288704120, 1730598707833, 1737523982422, 1734595623064 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9423/Reviewer_z8w6" ], [ "ICLR.cc/2025/Conference/Submission9423/Reviewer_zy5U" ], [ "ICLR.cc/2025/Conference/Submission9423/Reviewer_66WC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9423/Area_Chair_9JtU" ] ], "structured_content_str": [ "{\"summary\": \"This paper suggests a novel framework for trajectory inference in single-cell datasets, $\\\\texttt{TAROT}$ which comprises three main steps; (a) obtaining cell representations via a masked autoencoder, (b) mapping cells via regularized optimal transport (infomed of biological priors), and (c) inferring a continuous trajectory. To showcase the framework\\u2019s performance the authors generated a large-scale integrated mouse retinal ganglion cell dataset, $\\\\texttt{Mouse-RGC}$. 
Evaluation over $\\\\texttt{Mouse-RGC}$ alongside two other publicly available single-cell datasets is provided.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"*Contribution*: The paper suggest a joint approach for the trajectory inference task, combining data representation, mapping between time points, and continuous trajectory inference.\", \"*Quality & clarity*: The paper is well written presenting high quality work--required background is provided followed by in depth model description, and diverse analysis providing both quantitative comparison as well as biological validations.\", \"*Significance*: The suggested framework addresses a challenging task in the single cell field\\u2013inferring the continuous dynamics\\u2013stemming from the destructive nature of experimental protocols.\"], \"weaknesses\": [\"*Limited novelty*: $\\\\texttt{TAROT}$ is presented as a novel framework however all three components have been previously suggested for single cell data analysis. Specifically, both WOT (Schiebinger et al. 2019) and moscot (Klein et al., 2023) utilize OT for single cell trajectory analysis, rely on lower dimensional representations for the mapping, and allow incorporation of biological priors via the source and marginal distributions as well as un-balanced mapping. Of note, while the authors relate to these as \\\"preliminary\\\" works they are well accepted and used by the community (e.g. WOT has ~770 citations and moscot's github has ~100 stars).\", \"*Reliance on cell clustering*: As remarked by the authors (in Appendix C.3), cell clustering is a critical step in data preprocessing for $\\\\texttt{TAROT}$ application. This can be a limitation when reliable clusters are unavailable and generating them may require \\\"experience of biology scientists\\\" (as used by the authors for the $\\\\texttt{Mouse-RGC}$ data, Sec. 3.2). 
The ablations show that poor clustering quality, as well as simply removing the cost, results in poor performance; it will be valuable to relate to this limitation.\", \"*Performance assessment*: Qualitative results rely on metrics which are designed to validate the developmental and functional priors induced by $\\\\texttt{TAROT}$. Though the gene pattern assessment relies on genes not included in the prior, $\\\\texttt{TAROT}$ has a clear advantage in these tasks, as it is designed to capture these patterns. Next, the biological contribution is presented only for $\\\\texttt{TAROT}$ (Figure 7) and knockout analysis, whereas it may be that other methods capture the same trends, or compared to the least performing method (Figure 8).\", \"*Data availability*: The curation of $\\\\texttt{Mouse-RGC}$ is presented by the authors as a major contribution; however, the data is not made available.\"], \"questions\": \"Questions are in light of the presented weaknesses; can the authors:\\n\\n(i) clarify the exact novelty of the framework; \\\\\\n(ii) present further/more proper evaluation to justify (i); \\\\\\n(iii) relate to the dependency on cell clustering; \\\\\\n(iv) provide details on the setting used for the baseline methods (choices of parameters, cost construction etc.); \\\\\\n(v) make the $\\\\texttt{Mouse-RGC}$ dataset available for the community.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce Tarot, a regularized optimal transport model for learning cellular lineages in developmental data. Tarot utilises time-resolved RNA-seq gene expression data to learn how cellular lineages unroll in time. More specifically, the model relies on unpaired snapshots of single cells collected in time, which are matched with each other using optimal transport, since RNA-seq does not allow sequencing of the same cell multiple times across samples. 
The first component of Tarot is a masked autoencoder, used to project normalised gene expression to a lower dimensional embedding space. The masked autoencoder is trained to reconstruct randomly masked regions of the input gene space with a mean squared error (MSE) loss. Once trained, the bottleneck representations are used to train a regularised optimal transport algorithm to match clusters of cells across times and reconstruct developmental lineages. The OT approach is regularised to prevent backward mapping in time and ensure the preservation of the ground-truth monotonic behaviour of key lineage genes. Finally, continuous trajectories are learnt as B-splines on the\\u00a0cluster representations. The authors demonstrate improvement over three baselines based on lineage reconstruction and gene pattern preservation. They also validate the model biologically on its ability to recapitulate gene expression dynamics across developmental time and the effect of perturbation on the cellular trajectory. Together with a technical contribution, the authors release an integrated dataset of 30k mouse neuronal cells.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I think the paper has a clear biological reasoning at its foundations, which I appreciate and tends to be suboptimal in the context of machine learning conferences. I also value the effort of collecting, curating and integrating a whole new dataset, which I am sure will create value in the community. Structure-wise, I liked the schematic way both contributions and metrics were presented, it made the paper easier to follow. I also enjoyed the visuals quite a lot, you can tell that the authors spent time on them and I found them very convincing. The reading was enjoyable. Finally, I really like the idea of informing OT with proper biological choices. 
I think reasoning towards more biologically informed optimal transport rather than blindly applying Euclidean costs is important for future developments in the field.\", \"weaknesses\": [\"Major\", \"Although I believe the authors when they claim good representation performance of their autoencoder, I feel claiming its \\u201csuperiority\\u201d is rather unsupported. Probably it is a mischoice in the terminology, but there is no proper benchmark demonstrating in what way the masked autoencoder is better than traditional approaches (e.g. scVI [1]) for the proposed task. Also, masked autoencoders for single-cell have now been proposed in multiple settings, which makes their use a nice asset, but not a source of novelty. I think the work would benefit from proof of the reported superiority in the representation, or, conversely, the authors may want to tone down the representation aspect a little bit in the presentation.\", \"In my opinion, referring to the task of \\u201crecasting learning trajectories\\u201d as a machine learning problem overlooks the fact that ML for trajectory inference is now a standard in the computational biology field. I would rather phrase the contribution as an additional method in this direction. This shortcoming is reflected in the Related Works section, where only traditional methods (not based on deep learning) are discussed. 
I think such a section would benefit from an update including how machine learning has already been applied to OT for single-cell trajectory inference.\", \"I wonder why Mouse-IEP was not included in the benchmarks comparing Tarot with existing models.\", \"Though I generally value attention to biology, I believe the biological details on page 4 may be shortened a little bit (and integrated with details in the appendix) to dedicate more time to the methods part, as some aspects (like the masked autoencoder part) felt a bit rushed.\", \"In my opinion, one of the major drawbacks of the method is the operation at the cluster level. I see the benefit of such an approach on the dataset at hand since Figure 1 demonstrates low variability within clusters. Nevertheless, some datasets display larger heterogeneity per biological cluster [4], which would make the representation choice suboptimal. I think acceptance at ICLR would require a more comprehensive dataset choice, including more challenging settings (e.g. the MEF dataset in [5]). The authors claim that individual cellular representations are also an option, but no evidence was provided in these regards, so I cannot validate the claim properly.\", \"Though I see the value of the way the authors perform continuous trajectory learning, other methods have been released based on dynamic optimal transport that learns continuous vector fields (and so interpolations). An example is the standard OT conditional flow matching [6]. The lack of a comparison to such methods should be discussed. Comparison with at least a couple of them would be very beneficial for the contribution.\", \"I wonder if limiting the evaluation of GPT-G and GPT-L to monotonically varying genes is comprehensive enough. In other more complex datasets, this would potentially impair modelling more complex regulatory processes. 
It would be useful for the authors to elaborate more on this.\", \"In my opinion, the paper lacks a little bit of elaboration on some aspects of the experimental process. How did you run the several methods to make them compliant with the setting you propose? Did you simply run them on averaged cluster representations? Were other adjustments needed?\", \"In my opinion, the way the GSEA analysis is presented is a bit unclear and the biological value of the finding is not discussed. It would be beneficial to at least disclose the GO terms in the main text and explain why they make sense in the considered system.\", \"The change in trajectory between the perturbed and unperturbed states in Figure 9 is not discussed in terms of biological relevance. Are the changes in cluster transitions biologically relevant?\", \"I think the last result on real-world experiments would have benefitted from some visualisation or table.\", \"Minor\", \"The authors refer to cells with the term \\u201cgene expressions\\u201d. In my opinion, it would be more appropriate and readable to address them as \\u201csequenced cells\\u201d or \\u201cgene expression profiles of n cells\\u201d. The way it is phrased now may be confused with the concept of \\u201cnumber of genes\\u201d.\", \"\\u201cSpecifically, the number of nearest neighbors was chosen to be 50, according to the rich experiences of biology scientists\\u201d. I would personally remove this sentence unless supported by a solid citation. Packages like scanpy use a lower default number of neighbours and that seems to work properly. It is fine if more neighbours worked better for the datasets in this study, but I would phrase this sentence in this direction.\", \"I would make sure indexes are not bolded in the notations (e.g. line 245).\", \"As of now, I unfortunately believe that the reasons to reject the current version of the paper outweigh the reasons to accept it. 
But I am happy to discuss the author\\u2019s views on my comments.\", \"[1] Lopez, Romain, et al. \\\"Deep generative modeling for single-cell transcriptomics.\\\"\\u00a0Nature methods\\u00a015.12 (2018): 1053-1058.\", \"[2] Fang, Zhaoyu, Ruiqing Zheng, and Min Li. \\\"scMAE: a masked autoencoder for single-cell RNA-seq clustering.\\\"\\u00a0Bioinformatics\\u00a040.1 (2024): btae020.\", \"[3] Kim, Jaesik, et al. \\\"Single-cell Masked Autoencoder: An Accurate and Interpretable Automated Immunophenotyper.\\\"\\u00a0NeurIPS 2023 AI for Science Workshop. 2024.\", \"[4] He, Zhisong, et al. \\\"An integrated transcriptomic cell atlas of human neural organoids.\\\"\\u00a0bioRxiv\\u00a0(2023): 2023-10.\", \"[5] Schiebinger, Geoffrey, et al. \\\"Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming.\\\"\\u00a0Cell\\u00a0176.4 (2019): 928-943.\", \"[6] Tong, Alexander, et al. \\\"Improving and generalizing flow-based generative models with minibatch optimal transport.\\\"\\u00a0arXiv preprint arXiv:2302.00482\\u00a0(2023).\"], \"questions\": [\"I would like to ask why the $D^{dev}$ loss is needed? How do you compute couplings? Don\\u2019t you always map clusters from t to those in t+1 like in the standard OT matchings? Why can clusters in t+1 be ancestors of t?\", \"In my view the constraint to monotonically varying genes is a bit restrictive and system specific. Is it possible to integrate more complex developmental patterns for other genes. Would you mind elaborating on how you would do it?\", \"I did not understand how the B-splines method accommodates mutations in the trajectory. Is it related to the perturbation experiment? Could one model mutations with standard discrete optimal transport as well?\", \"How did you compare with Moscot on the TOC metric? 
As far as I know, it is not a method for learning pseudotime.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new method, TAROT, to address the challenge of temporal trajectory analysis in scRNA-seq data. It introduces a new dataset, Mouse-RGC, consisting of 30K mouse retinal ganglion cells across 9 developmental stages. The core of the method is regularized optimal transport (OT) which integrates biological priors to map cell states across time points. TAROT utilizes a masked autoencoder to learn cell embedding representations, and generates continuous trajectories using B-Splines. Experiments demonstrate TAROT\\u2019s performance compared to existing methods on multiple benchmarks, and the authors also simulate gene knockout experiments to validate the biological relevance of the model\\u2019s findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality** : The paper proposes an integrated temporal trajectory analysis method that combines different components\\u2014cell representation learning, trajectory inference, and continuous trajectory generation. And It also introduce a new dataset can be used in other benchmarking work.\\n\\n**Quality**: The paper includes a set of biological analyses, including gene knockout simulations and pathway alignments, which may provide some biological meaningful insights. 
Ablation studies were also conducted.\", \"**Clarity** : The paper is well-organized and easy to follow.\", \"**Significance**: Temporal trajectory analysis is important in single-cell biology.\", \"weaknesses\": [\"Despite the strengths discussed above, there still exist some concerns:\", \"**Methodological novelty** : Although the paper presents a unified framework, the components of this framework\\u2014such as cell representation learning via masked autoencoders, regularized OT for trajectory inference, and the use of splines for continuous trajectory generation\\u2014are not novel. Each of these techniques has been explored in previous works (e.g., WaddingtonOT [1], Moscot [2], Slingshot, Monocle). The combination of these existing methods may come across as a straightforward integration. The paper may need to further address how this combination leads to new insights or capabilities that were not achievable with existing techniques, with more systematic benchmarking.\", \"**Limited to Cluster-Level Resolution**: This paper produces continuous trajectories at the cluster level rather than achieving single-cell resolution. Recent advancements, particularly in dynamic optimal transport, can directly generate continuous single-cell trajectories, and simulation-free approaches such as conditional flow matching (see [3,4,5]) can scale to high-dimensional gene spaces. The paper may need to adequately compare TAROT against these more recent methods and explain the advantages TAROT provides over approaches that can inherently handle single-cell granularity.\", \"**Generalization**: While the authors demonstrate TAROT\\u2019s effectiveness on Mouse-RGC, the paper does not provide sufficient discussion on the generalizability of the approach. For example, regarding cross-dataset evaluation beyond Mouse-RGC, it is unclear whether the framework is broadly applicable or tailored to a specific type of data. 
In the abstract, it is stated that \\\"....Existing approaches show a limited generalization of our datasets*\\\". Readers may naturally question the performance of TAROT on other datasets where existing methods demonstrated good performance.\", \"**Cell Division and Death:** The current framework does not consider cell division or death (e.g., Moscot), which is crucial in biological processes that affect trajectory inference.\", \"**Data Availability:** A data availability statement could not be found.\", \"Based on the discussions above, I suggest the work could be further revised substantially, and I would be very happy to increase the score if the concerns are adequately addressed by the authors.\", \"[1] Schiebinger, Geoffrey, et al. \\\"Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming.\\\" [2] Klein, Dominik, et al. \\\"Mapping cells through time and space with moscot.\\\" [3] Tong, Alexander, et al. \\\"Simulation-free Schr\\u00f6dinger bridges via score and flow matching.\\\" [4] Lipman, Yaron, et al. \\\"Flow matching for generative modeling.\\\" [5] Tong, Alexander, et al. \\\"Improving and generalizing flow-based generative models with minibatch optimal transport.\\\"\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The reviewers gave mixed reviews of this paper, leaning towards rejection. The authors did not engage in a discussion to challenge this view. It is therefore hard to change the initial response.\", \"additional_comments_on_reviewer_discussion\": \"None,\"}" ] }
5JOxazmj8b
From Link Prediction to Forecasting: Information Loss in Batch-based Temporal Graph Learning
[ "Moritz Lampert", "Christopher Blöcker", "Ingo Scholtes" ]
Dynamic link prediction is an important problem considered by many recent works proposing various approaches for learning temporal edge patterns. To assess their efficacy, models are evaluated on publicly available benchmark datasets involving continuous-time and discrete-time temporal graphs. However, as we show in this work, the suitability of common batch-oriented evaluation depends on the datasets’ characteristics, which can cause multiple issues: For continuous-time temporal graphs, fixed-size batches create time windows with different durations, resulting in an inconsistent dynamic link prediction task. For discrete-time temporal graphs, the sequence of batches can additionally introduce temporal dependencies that are not present in the data. In this work, we empirically show that this common evaluation approach leads to skewed model performance and hinders the fair comparison of methods. We mitigate this problem by reformulating dynamic link prediction as a link forecasting task that better accounts for temporal information present in the data. We provide implementations of our new evaluation method for commonly used graph learning frameworks.
[ "Graph Neural Network", "GNN", "Temporal Graph", "Dynamic Link Prediction", "Dynamic Graph", "Temporal Graph Learning", "Dynamic Graph Learning", "Temporal Graph Neural Network", "TGNN", "DyGNN", "Dynamic Graph Neural Network" ]
Reject
https://openreview.net/pdf?id=5JOxazmj8b
https://openreview.net/forum?id=5JOxazmj8b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zDDVSZsHCh", "z8pkCekt2k", "t8l9DCg4YD", "t7Hr8m0qrJ", "qOctoIcryq", "q94KNji7cl", "oiAN58Vl2M", "l6LXKXvGlt", "jYHp8G0d45", "aI3VzUqG79", "XEB2Kau4HB", "MHLF2MPeRl", "K5oUlOj20P", "JXMZGpdSf2", "JHsfFCZe9a", "J2UmifdrOZ", "DD0s3EzEtb", "BW46GECejP", "BMUGHJyElV", "8wcmPWDKf6", "2JBA8YxtUF", "0DoyaOsFPj" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732204538499, 1730496412861, 1733137946383, 1730642638905, 1734702226171, 1732372202968, 1733301523207, 1732203945317, 1732204184045, 1730617050125, 1733137890738, 1730604686299, 1737524037828, 1732204820200, 1732759147568, 1733205434941, 1732204810462, 1732619244814, 1732818931330, 1732204245689, 1732677732118, 1732818664752 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_sDdG" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_m8vp" ], [ "ICLR.cc/2025/Conference/Submission10271/Area_Chair_rK1f" ], [ "ICLR.cc/2025/Conference/Submission10271/Area_Chair_rK1f" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_P4Ga" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_Bi4c" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_m8vp" ], [ 
"ICLR.cc/2025/Conference/Submission10271/Reviewer_sDdG" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_P4Ga" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_sDdG" ], [ "ICLR.cc/2025/Conference/Submission10271/Authors" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_Bi4c" ], [ "ICLR.cc/2025/Conference/Submission10271/Reviewer_sDdG" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate your evaluation of our work and are pleased that you consider the clarity of our work \\\"commendable\\\".\", \"we_provide_clarifications_to_your_comments_and_questions_below\": \"\", \"commenting_on_your_first_point\": \"Thank you for pointing this out, we are happy to clarify: The current formulation of dynamic link prediction allows tuning the batch size as a hidden hyperparameter. This may be for technical reasons, e.g. due to limited GPU memory, or for performance reasons where tuning the batch size to a given dataset and model improves model performance. However, if the batch size is tuned per model, this results most likely in different batch sizes for each model. Consequently, they consider potentially very different tasks, depending on how different the chosen batch sizes are. To address this, we propose to adopt \\\"batching\\\" in terms of fixed-size time windows, and we argue that those time windows should be chosen based on the specific dataset. Of course, this may be interpreted as modifying the existing dynamic link prediction task, however, by choosing a different name inspired by the time series analysis literature, we want to make the distinction clear: no fixed-size time windows vs. fixed-size time windows. 
\\nWe added further clarifications in the main text of the uploaded revision to emphasize that our evaluation method is designed to **replace** the batch-based evaluation.\", \"regarding_two_and_three\": \"It appears there was a slight misunderstanding about our experimental details.\\nWe trained all models using only the batch-based approach.\\nAfterward, we evaluated all models using both the batch-based and our proposed time-window-based approach and reported the performance differences.\\nWe improved the experimental details in the uploaded revision for more clarity.\\nSince we did not change the training procedure that we took over from Yu et al. [1] and reused the best-performing model configurations that were found in their hyperparameter tuning, we argue that no additional hyperparameter tuning is necessary.\\n### Responses\\n1. We mean the second thing that you mention. As long as the batch size is kept the same for all models, no unfairness arises. However, the batching might still impose an artificial ordering on temporal edges with the same timestamp (information loss) or lead to parallel processing of edges with different timestamps and treating them as happening simultaneously (information leakage). An unfairness issue arises when the batch size is treated as an implicit hyperparameter that is tuned on a per-model basis. Then, different models may end up with different batch sizes, which changes the characteristics of the prediction problem, making it easier or harder because more or less information of the original edges' timestamps is retained. Consequently, models that use different batch sizes should not be directly compared because they address a different prediction task.\\n2. It depends on how we think about the prediction task. A long time horizon essentially coarse-grains the links' temporal resolution, which is like saying \\\"all these links appear during the same time window\\\". 
However, we explicitly incorporate this in the problem formulation, making this a deliberate choice based on the problem domain. In Appendix C, we suggest plausible time horizons for the considered datasets. We argue that the time horizons should be defined by the characteristics of the dataset or the required granularity for the considered problem. For example, it may not be required to predict whether a customer will purchase a certain product within the next second; making such a prediction for the next day or week may be sufficient.\\n3. Both the varying time gaps and the information leakage are significant. For continuous-time datasets with inhomogeneously distributed temporal activity (such as Enron or UCI), we observe substantial performance changes for all models when comparing both evaluation approaches because of the varying time gaps. For discrete-time datasets, we observe a substantial drop in performance using the time-window-based approach compared to the batch-based approach for memory-based models because of the identified information leakage.\\n4. This depends on the specific interaction patterns. However, generally, a larger $h$ will produce more links per time window, thereby it reduces the wallclock time of training due to better parallelization. Note that (as pointed out above) we, for now, only consider the evaluation and not the training. In this sense, the \\\"training time\\\" would only be impacted if you consider the evaluation on a validation set, e.g. for early stopping, as part of the training. \\n5. As pointed out above, using a DLP pipeline and then evaluating both DLP/LF is how we conducted our experiments. Using the time-window-based approach for training is left for future work.\\n### References\\n[1] Le Yu et al. Towards Better Dynamic Graph Learning: New Architecture and Unified Library. 
In NeurIPS, 2023.\"}", "{\"summary\": \"This work tackles the issue of information loss and leakage in the batch-oriented evaluation protocol for temporal graph learning. It first validates the existence of such loss and leakage through experiments that compute the NMI between batches and timestamps. To address this, the authors propose a new evaluation protocol for link forecasting and re-evaluate numerous existing methods within this framework. The provided code in the supplementary material appears comprehensive and sufficient.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents a study that provides a novel perspective on the internal issues with the batch-oriented evaluation protocol. The overall presentation is clear, and it includes adequate background information. The experiments related to the new evaluation protocol are relatively comprehensive, with re-evaluation of numerous existing methods. However, there are some missing aspects (as noted in the identified weaknesses) that should be addressed.\", \"weaknesses\": \"W1. Figure 2 and Figure 3 offer trivial or intuitive results, which seem to be redundant in the main text. I would suggest to just put part of them in the main text and the remaining in the appendix. Otherwise, these two figures would attract too much attention from the readers and make them lost before reading your point (Figure 4).\\n\\nW2. The NMI may not be a trivial concept for common readers, therefore it would be helpful to include some technical details of it in the appendix and link to it at line 263. The current presentation of this paragraph is quite confusing as I cannot understand what lines 266-268 mean (how the NMI was computed). Maybe some formulation would help improve the clarification here.\\n\\nW3. 
While the issue with the batch-oriented evaluation protocol is effectively identified using NMI, the paper does not adequately explain how this impacts previous evaluations and benchmarks that use this protocol. For instance, could you provide experimental validation to demonstrate the biases or flaws in past evaluations? The experiments in the current manuscript are associated with the new link forecasting protocol. Still, I am confused about how to experimentally validate that this protocol is better than the previous ones. \\n\\nW4. One advantage of the batch-oriented evaluation is the efficiency, but there seems to be no comparison or comment regarding the training/evaluation efficiency of the link forecasting protocol. I am aware of line 393, Computational cost, but I think such a not in-depth analysis is not enough. Some experimental results could be included.\\n\\nW5. This is a relatively minor point, as is included in the limitation. For the frameworks that consider the temporal link prediction problem as a ranking task, is it possible to have more discussions about the pros/cons of time-window-based approaches and ranking approaches?\", \"minor\": \"1. line 143, should be {$t_{b\\\\cdot i}\\\\, ..., t_{b\\\\cdot (i+1)-1}$}? Also, line 244 seems to have the same mistake.\\n2. line 146, 147. if there is no constraint on (u,v), then it's possible that (u,v) is not in the B+ or B- batch. Is something like $(u, v) \\\\in B^+ \\\\cup B^-$ missing? I think similar issue exists for you link forecasting definition, line 379.\\n3. line 346. \\\"negative edges that do not occur in time window [i \\u00b7 h,(i + 1) \\u00b7 h)\\\". I feel it ambiguous. is those negative edges not in this time window, or in this time window but do not occur (interact)? Should be the latter?\\n4. Actually it could be an independent section for related work. 
I would recommend commenting on how existing TGNN benchmarks flow to validate that the widely-used batch-oriented training/evaluation does have issues that this work is trying to fix.\n5. The code implementation largely builds upon an existing framework, DyGLib, and it would be helpful to include comments or documentation clarifying where the extensions and modifications occur at the code level (code snippets) in the appendix.\", \"questions\": \"I currently lean towards a rejection of the manuscript but am very open to further discussion and increasing the score based on how well my concerns get addressed. I encourage the authors to refer to the identified weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### W4\\n\\nThe following table shows the average runtime of both evaluation approaches (batch-based and window-based) across five runs in seconds. Note that some cells currently contain NaN values and not all datasets are included because the experiments for these configurations are not yet finished. For most continuous-time datasets and models, the table shows that both approaches have a comparable runtime, supporting our claims that our approach is comparable in runtime efficiency when the windows contain, on average, a similar number of edges compared to a batch. Note that due to the inhomogeneously distributed temporal activity of the UCI dataset and the chosen time window duration (to get an average of 200 edges per window), the time windows in the test set generally contain fewer than 200 edges, resulting in a longer runtime for the window-based approach than the batch-based approach. Our approach is faster for discrete-time datasets since, instead of using a fixed number of 200 edges per batch, we use each snapshot as a whole. 
The snapshots typically contain more than 200 edges, leading to more edges being processed in parallel and, thus, a faster evaluation. \\n\\n| Dataset | Approach | JODIE | DyRep | TGN | TGAT | CAWN | TCL | GraphMixer | DyGFormer |\\n|:---|:---|:----|:-----|:----|:----|:-----|:----|:----|:-----|\\n| Enron | Batch | 3.27 \\u00b1 0.14 | 4.66 \\u00b1 0.42 | 4.69 \\u00b1 0.51 | 48.19 \\u00b1 0.58 | 97.30 \\u00b1 0.70 | 10.32 \\u00b1 0.10 | 17.11 \\u00b1 0.09 | 66.42 \\u00b1 0.36 |\\n| Enron | Window | 3.97 \\u00b1 0.10 | 5.44 \\u00b1 0.21 | 5.44 \\u00b1 0.19 | 46.00 \\u00b1 0.97 | 86.02 \\u00b1 0.52 | 11.43 \\u00b1 0.11 | 17.12 \\u00b1 0.27 | 58.17 \\u00b1 0.66 |\\n| UCI | Batch | 2.71 \\u00b1 0.23 | 3.48 \\u00b1 0.28 | 3.50 \\u00b1 0.33 | 28.63 \\u00b1 0.80 | 87.15 \\u00b1 0.57 | 5.73 \\u00b1 0.11 | 10.23 \\u00b1 0.33 | 32.93 \\u00b1 1.05 |\\n| UCI | Window | 8.40 \\u00b1 0.24 | 10.07 \\u00b1 0.12 | 10.37 \\u00b1 0.17 | 38.10 \\u00b1 0.92 | 95.46 \\u00b1 0.62 | 15.63 \\u00b1 0.19 | 16.78 \\u00b1 0.38 | 37.86 \\u00b1 0.97 |\\n| MOOC | Batch | 74.51 \\u00b1 0.83 | 82.40 \\u00b1 1.66 | 88.52 \\u00b1 2.52 | 293.57 \\u00b1 2.60 | 666.43 \\u00b1 38.50 | 136.00 \\u00b1 0.25 | 140.94 \\u00b1 0.48 | 340.66 \\u00b1 9.61 |\\n| MOOC | Window | 72.15 \\u00b1 2.78 | 79.16 \\u00b1 4.34 | 83.47 \\u00b1 0.44 | 278.63 \\u00b1 5.28 | 613.75 \\u00b1 37.14 | 117.93 \\u00b1 0.48 | 127.60 \\u00b1 0.61 | 336.95 \\u00b1 0.89 |\\n| Wiki. | Batch | 16.98 \\u00b1 0.87 | 19.73 \\u00b1 0.57 | 19.57 \\u00b1 0.87 | 75.19 \\u00b1 1.24 | 140.82 \\u00b1 1.33 | 18.80 \\u00b1 0.32 | 30.23 \\u00b1 0.41 | 85.28 \\u00b1 1.37 |\\n| Wiki. 
| Window | 19.96 \\u00b1 0.50 | 22.46 \\u00b1 0.71 | 23.46 \\u00b1 1.00 | 78.16 \\u00b1 0.85 | 139.29 \\u00b1 0.64 | 21.43 \\u00b1 0.31 | 32.27 \\u00b1 0.73 | 85.56 \\u00b1 0.76 |\\n| LastFM | Batch | 380.99 \\u00b1 12.12 | 370.06 \\u00b1 0.17 | 383.59 \\u00b1 23.13 | 1162.63 \\u00b1 7.73 | 3689.96 \\u00b1 81.35 | nan | nan | nan |\\n| LastFM | Window | 294.37 \\u00b1 0.60 | 314.94 \\u00b1 0.56 | 315.01 \\u00b1 0.46 | 1045.44 \\u00b1 14.71 | 3381.94 \\u00b1 16.44 | nan | nan | nan |\\n| Myket | Batch | 481.59 \\u00b1 7.94 | nan | 512.37 \\u00b1 20.87 | 962.83 \\u00b1 86.92 | 973.83 \\u00b1 4.48 | 626.97 \\u00b1 7.25 | 640.49 \\u00b1 7.10 | 993.96 \\u00b1 43.32 |\\n| Myket | Window | 544.35 \\u00b1 14.55 | 559.54 \\u00b1 11.55 | 553.53 \\u00b1 5.20 | 880.56 \\u00b1 75.42 | 1067.29 \\u00b1 6.12 | 618.84 \\u00b1 6.33 | 628.36 \\u00b1 6.55 | 948.99 \\u00b1 5.81 |\\n| US L. | Batch | 2.07 \\u00b1 0.10 | 2.57 \\u00b1 0.13 | 2.47 \\u00b1 0.15 | 23.66 \\u00b1 1.09 | 49.10 \\u00b1 0.84 | 6.88 \\u00b1 0.06 | 8.26 \\u00b1 0.13 | 34.72 \\u00b1 0.91 |\\n| US L. | Window | 1.28 \\u00b1 0.13 | 1.64 \\u00b1 0.11 | 1.56 \\u00b1 0.10 | 13.20 \\u00b1 0.16 | 33.15 \\u00b1 0.16 | 3.26 \\u00b1 0.12 | 4.69 \\u00b1 0.11 | 24.21 \\u00b1 0.54 |\\n| UN Tr. | Batch | 39.80 \\u00b1 0.28 | 46.56 \\u00b1 0.55 | 44.54 \\u00b1 0.15 | 769.15 \\u00b1 20.48 | 598.23 \\u00b1 0.26 | 136.72 \\u00b1 0.45 | 262.35 \\u00b1 0.59 | 361.83 \\u00b1 2.14 |\\n| UN Tr. | Window | 2.28 \\u00b1 0.11 | 4.17 \\u00b1 0.12 | 4.41 \\u00b1 0.09 | nan | nan | 24.08 \\u00b1 0.22 | nan | nan |\\n| Can. P. | Batch | 2.65 \\u00b1 0.14 | 4.92 \\u00b1 0.27 | 4.94 \\u00b1 0.32 | 108.18 \\u00b1 0.41 | 191.27 \\u00b1 1.12 | 11.12 \\u00b1 0.16 | 29.76 \\u00b1 0.38 | 85.84 \\u00b1 0.83 |\\n| Can. P. 
| Window | 0.56 \\u00b1 0.12 | 2.30 \\u00b1 0.15 | 2.27 \\u00b1 0.13 | 75.20 \\u00b1 0.51 | nan | 4.68 \\u00b1 0.18 | nan | nan |\\n\\nWe will include the runtime of all datasets and models in the camera-ready version.\"}", "{\"summary\": \"This paper highlights an overlooked issue in the evaluation of the dynamic link prediction task: fixed batch-size evaluation alters the task properties. For continuous-time temporal graphs, it leads to inconsistent evaluation durations across batches; for discrete-time temporal graphs, it leads to possible data leakage due to additionally introduced temporal dependencies.\\n\\nTo explore this issue in depth, the paper first defines a quantitative metric, NMI, to measure information loss, and conducts extensive empirical analysis to demonstrate how fixed batch sizes distort the task setting. It then formulates a fairer setting, *link forecasting*, enabling consistent time durations for each evaluation batch. Finally, the authors reproduce experiments on existing methods within this reformulated task to reveal their true performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"*S1* This paper addresses an important yet overlooked issue in the temporal link prediction task, i.e., fixed batch-size evaluation can distort the task itself by losing or introducing extra information.\\n\\n*S2* The authors provide extensive data illustrations and quantitative results to facilitate understanding, demonstrating that each dataset has a distinct interaction distribution and how fixed batch-size evaluation can alter the task characteristics.\\n\\n*S3* This paper formulates a new task setting, *link forecasting*, and offers implementation and reproduction of existing methods to provide valuable insights.\", \"weaknesses\": \"I appreciate the issue raised in this paper and the extensive empirical analysis that clarifies the motivation behind the study. 
However, for the experiments on existing methods within the formulated link forecasting task, I think further discussion is required, e.g., **the reasons for performance changes across diverse settings should be addressed.**\\n\\n The authors explain why memory-based methods tend to experience performance degradation on discrete-time graphs, which is appreciated. However, it would be beneficial to discuss why other methods might improve in this setting. Additionally, the performance trends for continuous-time graphs appear mixed, potentially due to specific dataset characteristics. I think a more in-depth discussion on these points would enhance the paper.\", \"questions\": \"Please refer to the \\\"Weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper addresses the limitations of existing evaluation strategies for dynamic link prediction (DLP) tasks in temporal graphs, proposing a new task setting called link forecasting (LF). The authors highlight how fixed batch-size evaluation distorts task properties and can lead to issues like temporal dependencies or information leakage. They introduce a novel time-window-based evaluation approach, backed by extensive empirical results, to provide more accurate performance assessments of existing models.\\n\\nWhile the proposed model shows promising results, there are several weaknesses that still need to be addressed:\\n\\n1. The batch-oriented evaluation problem in continuous-time graphs only affects memory-based methods. Recent sequence-based models (e.g., GraphMixer, DyGFormer) ensure that each node is assigned all its historical neighbors immediately before the prediction time, thereby avoiding the issues described.\\n2. 
The proposed LF setting doesn\\u2019t fully resolve the problem, as it still faces information leakage when the edges in a time window exceed GPU memory. While the authors suggest that discarding information within each time window can mitigate information leakage, this can also be achieved when using batch-based evaluation.\\n3. The time-window approach introduces an unfixed number of edges per batch, which could impact training stability and convergence, but this is not discussed in the paper.\\n\\nBased on these weaknesses, we recommend rejecting this paper. We hope this feedback helps the authors improve their paper.\", \"additional_comments_on_reviewer_discussion\": \"In their rebuttal, the authors made several improvements, including clarifications and updates to the presentation, which help reviewers better understand the contributions of the paper. However, the reviewers\\u2019 concerns regarding the novelty of the paper and the effectiveness of the proposed method, remain unresolved. As a result, I recommend rejection based on the reviewers\\u2019 feedback.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThe authors have uploaded their rebuttal. Please take this opportunity to discuss any concerns you may have with the authors.\\n\\nAC\"}", "{\"comment\": \"We thank all reviewers for the constructive discussions and their helpful suggestions. We are happy to see that you think that the \\\"overall presentation is clear\\\" with \\\"clarity [that] is commendable\\\" and unanimously agree that the soundness of our paper is good. We further appreciate that the reviewers agree that our paper \\\"addresses an important yet overlooked issue\\\" and contains \\\"a great amount of experiments to illustrate the limitations of existing techniques as well as the effectiveness of the proposed method\\\".\\n\\nFurthermore, we are glad that we were able to **clear up most concerns and, especially, all concerns of the reviewers m8vp, Bi4c, and sDdG**. 
Specifically, we improved our explanations for the performance changes due to the comments made by reviewer m8vp. Thanks to suggestions made by reviewer sDdG, we were able to provide additional empirical evidence that supports our explanations of the performance changes as well as our claims about the prediction task being skewed and unfair. Thanks to reviewer sDdG, a runtime evaluation showing comparable results for our method and the batch-based approach, supporting our claims about computational cost in our manuscript, will be added in the camera-ready version. We were also able to correct typos and improve the general formulations in our manuscript to avoid potential misunderstandings thanks to the comments of the reviewers. Following the suggestion made by reviewers P4Ga and sDdG, we added an additional appendix that explains the NMI measure in more detail and agree with Reviewer sDdG who thinks \\\"it is pretty clear now\\\". However, we are happy to follow reviewer P4Ga's suggestion and will provide examples that use our notations that will help with the understanding in the camera-ready version.\\n\\nNext, we provide further clarifications about the unresolved concerns of reviewer P4Ga about the time horizon $h$:\\n\\nWe agree that the horizon $h$ is a parameter that can be changed. We further agree that there might not be a distinct value that makes sense for all forecasting tasks (e.g. forecasting links on Reddit, as mentioned by reviewer P4Ga). However, in contrast to the batch size - a **hyperparameter that is typically tuned for best performance** - the **horizon $h$ is a parameter that defines the task and cannot be tuned**.\\n\\nAdditionally, while it might make sense to calculate a deterministic unique value for some tasks based on the dataset's characteristics as reviewer P4Ga proposes, there are also many cases where the **time horizon of the task is very different from the optimal time horizon based on the dataset's characteristics**. 
A temporal network of train connections, for example, might have an optimal horizon $h=24h$ since most trains follow a daily schedule. Instead of learning the fixed schedule, the task for the forecasting model could be to learn unforeseen deviations from the schedule to react accordingly, e.g. by rerouting trains. As such, the forecasting horizon $h$ should be based on the minimum reaction time that is needed to make, e.g. the rerouting of a train possible, but at the same time as short as possible to give the model access to the most up-to-date information, e.g. $h=60min$ instead of $24h$ as suggested by the data. \\n\\nThus, while the separate problem of **finding an optimal time horizon with theoretical guarantees** is related and can help for some link forecasting tasks where no specific time horizon is required, it is **out of scope for our work**. For further insights into this separate problem, we kindly refer to papers addressing this separate problem:\\n\\nClauset, A. & Eagle, N. Persistence and periodicity in a dynamic proximity network. in Proceedings of the DIMACS Workshop on Computational Methods for Dynamic Interaction Networks (2007).\\n\\nDarst, R. K. et al. Detection of timescales in evolving complex systems. Sci Rep 6, 39713 (2016).\\n\\nPetrovi\\u0107, L. V. et al. Higher-Order Patterns Reveal Causal Timescales of Complex Systems. Preprint at https://doi.org/10.48550/arXiv.2301.11623 (2023).\\n\\nIn general, our discussion with reviewer P4Ga about the task definition of dynamic link prediction and the time horizon $h$ highlights the complexity of the problem itself. 
Therefore, we believe that by pointing out problems in the current evaluation approaches and by suggesting a replacement that fixes these problems, our work is a valuable contribution that helps to bring temporal graph learning closer to making real-world impact.\"}", "{\"comment\": \"We appreciate the valuable feedback and are glad that you think our paper addresses an \\\"important yet overlooked issue\\\" and provides \\\"valuable insights\\\".\", \"we_agree_that_the_explanation_for_the_increase_in_performance_for_most_non_memory_based_models_on_most_discrete_time_datasets_is_incomplete_and_are_happy_to_provide_further_information_in_the_uploaded_revision_of_our_paper_as_well_as_in_the_following\": \"We observe an increase in performance for our time-window-based approach compared to the batch-based approach for most non-memory-based models on most discrete-time datasets because for $h = 1$ our time-window-based approach uses all edges from one snapshot for each time window preventing overlapping batches that contain edges from multiple snapshots.\\nIn contrast, the batch-based approach creates batches that \\\"stretch across snapshots and discard the temporal ordering of edges from different batches\\\" (lines 311-312).\\nWhen making predictions, only edges that occur _before_ the current batch may be used.\\nHowever, since batches stretching across snapshots contain edges from multiple snapshots, all edges belonging to those snapshots must not be used to make predictions -- because they have not occurred _before_ the batch.\\nTherefore, in the case of overlapping batches, not all available information can be used to make predictions.\\nHowever, with our proposed link forecasting task definition, this is not an issue, resulting in the observed performance increase.\\n\\nThank you for suggesting a more in-depth discussion on the performance trends for continuous-time graphs.\\nIn Appendix C, we include a more in-depth discussion on model performance for link 
forecasting using realistic time horizons on continuous-time graphs.\"}", "{\"comment\": \"We are grateful for your constructive feedback and are happy to see that you find that the problems we address \\\"are significant but seldom considered and addressed in most existing studies\\\".\\nWe address your comments in the following. \\nNote that due to space constraints, our response is split into multiple comments:\\n\\n### W1\", \"regarding_your_disagreement_with_our_task_definition\": \"We agree with your understanding of the task that \\\"a dynamic link prediction model should be able to predict all the possible edges at a specific timestamp but not within a batch\\\".\\nIndeed, as we state in l. 128-129, \\\"Given time-stamped edges up to time t, the goal of dynamic link prediction is to predict whether an edge (v, u, t+1) exists at future time t+1\\\".\\nWhile this definition is commonly used in recent papers on dynamic link prediction, to the best of our knowledge, none of those works have actually adopted an evaluation procedure that would exactly match this task definition. \\nFor computational efficiency, in practice, edges are split into fixed-size batches.\\nTherefore, simplifications are made for model training and evaluation, i.e. 
batches that may span many timestamps are used.\\nThe edges in each batch can then be processed in parallel, but, as a result, negative edges are also sampled on a per-batch basis.\\nHighlighting exactly this mismatch between the task definition and evaluation practices in recent literature is the key contribution of our work.\\nThus, in section 2 of our paper, we formulated the definition of link prediction using a prediction setting for edges per batch rather than per timestamp.\", \"referring_to_your_comment_about_temporal_encodings\": \"We are aware of the temporal encodings that most models employ as mentioned in lines 137-140.\\nHowever, regardless of the model's abilities to encode the timestamp of individual edges, in the process of negative sampling, we nevertheless discard temporal information within batches, which leads to a biased evaluation:\\nThis is because - due to collision checks [1] - if an edge occurs as a positive sample in a batch, it cannot occur as a negative sample in the same batch even if it occurs with a different timestamp.\\nThus, while the prediction is made using the positive sample's timestamp, it is ignored for the remaining duration of the batch during evaluation, essentially discarding the information of when the prediction for the edge was made within the duration of the batch.\\nWe thank the reviewer for highlighting this possible misunderstanding.\\nWe have edited the text in the uploaded revision to clarify this point.\", \"regarding_your_comment_about_fairness\": \"Developing a formal definition of \\\"fairness\\\" for the comparison of dynamic link prediction methods is an interesting suggestion for future research. However, even in the absence of such a formal definition, in our work, we argue that fairness is not given under the current dynamic link prediction setup because the batch size is effectively a free hyperparameter that influences the difficulty of the prediction task. As we illustrate in Fig. 
1, choosing different batch sizes can drastically affect the resulting time windows defined by batches. Moreover, we show empirically that real networks exhibit diverse interaction patterns (Fig. 2) and that tuning the batch size has a large effect on the resulting duration of batches (Fig. 3). Consequently, it is expected that model performance changes with the batch size, which makes it possible to tune the task to the chosen model instead of tuning the model to the task. To address this issue, we propose _dynamic link forecasting_, which is based on fixing the time horizon to make model performance comparable across different models because the task is kept the same. \\nWe argue that this approach is, in fact, more fair than the practice currently adopted in the community.\"}", "{\"summary\": \"In this paper, the authors considered the evaluation of dynamic link prediction on both discrete-time dynamic graphs (DTDGs) and continuous-time dynamic graphs (CTDGs). They first provided a series of empirical analysis results to demonstrate the limitations of existing batch-based evaluation strategies. A novel time-window-based approach was further proposed to address these limitations. The authors have also validated the effectiveness of the proposed evaluation approach on various public DTDGs and CTDGs datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**S1**. The authors have conducted a great amount of experiments to illustrate the limitations of existing techniques as well as the effectiveness of the proposed method.\\n\\n**S2**. The limitations of the evaluation of dynamic link prediction are significant but seldom considered and addressed in most existing studies.\", \"weaknesses\": \"**W1**. 
Some statements regarding the research gaps of existing techniques and motivations of this paper are weak, unclear, or confusing.\\n\\n According to my understanding, this study focuses on the evaluation of dynamic link prediction. However, the title of this paper uses the terminology 'temporal graph learning', which may not be consistent with the major topic of this study. From my perspective, learning may also include the training procedure (e.g., training algorithms, training losses, etc.), in addition to evaluation of the inference procedure.\\n \\n The author claimed that 'within each batch, edges are typically treated as if they occurred simultaneously, thus discarding temporal information within a batch'. To some extent, I do not agree with this statement. According to my understanding, nodes or edges in most TGNNs are encoded as embeddings (i.e., low-dimensional vector representations), which usually involves the **temporal encodings**. In this sense, the temporal information has not been discarded. It is recommended to give some more toy examples about why and how the temporal information is discarded, especially for the case with temporal encodings.\\n \\n I also respectfully disagree with the definition of dynamic link prediction in Section 2, which was claimed to 'predict whether $(u, v) \\\\in B_i^+$ or $B_i^-$' in terms of batches $B_i^+$ and $B_i^-$. According to my understanding, a dynamic link prediction model should be able to predict all the possible edges at a specific timestamp but not within a batch.\\n \\n The authors claimed that existing techniques may hinder the fair comparison of methods. However, it is unclear to me how to define and measure the fairness of comparison. 
A formal definition regarding this point is also recommended.\\n\\n From my perspective, using 'temporal link prediction' and 'temporal link forecasting' as two terminologies with different definitions may not be a good presentation, which may result in potential ambiguity issues, since they are more likely to be synonyms in natural languages. It is recommended to use a clearer terminology to replace 'dynamic link forecasting' (e.g., batch-based and window-based evaluation) that can help better distinguish between 'temporal link prediction' and 'temporal link forecasting'.\\n\\n***\\n\\n**W2**. There seem to be some flaws in the proposed evaluation approach.\\n\\n In Section 3.2, the authors discussed one possible limitation that the proposed approach 'cannot preclude memory overflows entirely'. A possible solution was then discussed, which 'splits large time windows into smaller batches for GPU-based gradient computation'. In this sense, the proposed method still used the old batch-based technique, which may still 'ignore the temporal information within each batch', as claimed at the very beginning of this paper.\\n \\n According to my understanding, the proposed method may also suffer from the empty window issue, where there are no edges in a 'pre-defined' window, due to the heterogeneous distribution of temporal edges. However, there are no discussions regarding this limitation and possible solutions.\\n\\n***\\n\\n**W3**. Some details of experiments need further clarification. Some additional experiments are also recommended.\\n\\n In the empirical analysis of this paper, NMI is a significant metric to 'measure the temporal information that is retained after dividing edges into batches'. However, the formal definition regarding how to compute such a measurement is not given. 
As a result, it is still hard for me to understand the physical meaning of NMI used in the empirical analysis.\\n \\n According to my understanding, the proposed time-window-based approach introduces another hyper-parameter $h$. However, there seems to be no parameter analysis regarding different settings of $h$ (like the empirical analysis shown in Fig. 3).\\n\\n\\n***\\n\\n**W4**. Although the authors have provided a great number of empirical results to demonstrate the limitations of existing techniques and validate the effectiveness of the proposed approach, it is better to provide some theoretical guarantees w.r.t. the evaluation of dynamic link prediction.\", \"questions\": \"See **W1**-**W4**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and your clarification regarding W3, as we apparently misunderstood your comment. Following your suggestion, we re-examined our experimental results to find empirical evidence that supports our claims. We report our findings in the following as well as the runtime of both evaluation approaches on some of the datasets.\\n\\n### W3\\n\\nThe results reported in Tables 3 and 4 show average AUC-ROC scores for each batch or window, which is the common way to report results when evaluating dynamic link prediction and the way it is implemented in DyGLib. To provide insights into the reasons behind the performance changes between both approaches, instead of taking the average, we visualize the AUC-ROC for each individual batch and show the AUC-ROC of each model for the discrete-time UN Trade dataset in the following:\", \"https\": \"//ibb.co/VL0r0hR\\n\\nDuring the second half of the observation period, our approach leads to more time windows compared to the number of batches. For our window-based approach, most models perform better during this period compared to their performance in the first half. 
This increase in performance during the second half of the observation period is masked by the batch-based evaluation, which aggregates this long period of time in a very small number of batches.\\n\\nThe plot further highlights that the batch-based evaluation is affected by an anomaly at hour 28356, where the performance of all models drops substantially. At this time, one employee sent 1705 emails at once, resulting in 1705 temporal edges with identical time stamps that are evaluated in 9 consecutive batches (which are again subject to the information leakage issue discussed above). In contrast, our approach only uses a single time window for these 1705 edges with identical time stamps.\\n\\nIn summary, we believe that these further insights substantiate our claims and we will be happy to include them in the camera-ready version (along with the same analysis for the other data sets).\"}", "{\"summary\": \"This work introduces a reformulation of the dynamic link prediction (DLP) task as link forecasting (LF). LF differs from DLP in that LF ensures that each batch corresponds to a fixed time resolution. This small change to the problem statement yields relatively significant changes to the observed accuracies of these models on benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem with variable-time batch construction for DLP is made quite clear in section 3. The examples are easy to understand\\n2. The experiments in section 4 are nearly exhaustive. The models cover the big TGNNs out there, and the datasets cover many of the major datasets out there.\\n3. The problem statement is clearly articulated and easy to understand. I had no problem implementing the discrete time version of the data loader in an afternoon of work. This clarity is commendable.\", \"weaknesses\": \"1. 
While the problem with DLP seems clear, it is hard for me to see why this requires the definition of a new task given that it's a straightforward modification of an existing one. I would recommend that the authors expand on this in the work to further draw the distinction, _or_ to assert that this is the way that DLP should be done in the future.\\n2. It appears that the models were not hyperparameter tuned for this new task. It stands to reason that, because the new task involves training on potentially quite different batch structures than the batches those hyperparameters were tuned for, the reported model performance might be an underestimate. I would suggest that the authors investigate the hyperparameter sensitivity of the models trained using LF.\\n3. The experimental details are a bit hard to follow in section 4. It appears that what was done is the authors trained a model using the horizon-based strategy, and then used this model to evaluate on both the horizon and batch based evaluation strategies. They then reported the performance differences. I would ask that the authors clarify these experimental details.\", \"questions\": \"1. It's unclear to me what the authors mean when they assert that DLP yields different amounts of information loss for different models. It seems to me that, if the batch size is fixed across all models, then the information loss (or really, the inter-batch leakage) will be equivalent. In what ways are they not? Or is the assertion that, because batch size is often treated as an (often implicit) hyperparameter, model comparisons can be implicitly unfair?\\n2. Does LF suffer from the same temporal training-leakage that DLP does when h is larger than infinitesimal?\\n3. Which is more significant? Batches containing variable time gaps, or the identified temporal leakage?\\n4. The primary motivation to use larger batch sizes is the reduction of wallclock time during training. 
How does the training time for LF scale as a function of h?\\n5. How do the results reported in table 2/3 change if the model was trained using a DLP pipeline and then evaluated on both DLP/LF?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"### Minor\\n\\n1. Yes, this is a typo. Thank you for catching it. We fixed it in the uploaded revision.\\n2. Thank you for the suggestion. This, indeed, improves the clarity of the definitions and is included in the uploaded revision.\\n3. Thank you for this question. We added a clarification in the uploaded revision. The sentence is now as follows: We use $W_i^-$ to denote a sample of $|W_i^+|$ negative edges that are sampled using one of the negative sampling approaches described in \\\\cref{app:negative_sampling} and do not occur as positive edges in time window $[i \\\\cdot h,(i+1) \\\\cdot h)$, i.e.\\\\ $W_i^- \\\\cap W_i^+ = \\\\emptyset$.\\n4. Thank you for this interesting idea. We are very much in favor of including such an appendix, especially since some of the problems we address are rather technical and depend on the specific implementation details of each method or training strategy. However, going through the implementations of all related work is not feasible in the remaining time of the rebuttal. We, thus, propose to include such an appendix in the camera-ready version.\\n5. We respectfully disagree that explanations about the specific implementation that we used should be included in the paper. We argue that the paper itself should be formulated in a general way that makes it possible to implement our approach in any available software framework. 
We will, however, release a de-anonymized GitHub repository with descriptions and documentation that make it easy to see our modifications compared to the framework our code is based on and provide the link in the camera-ready version.\\n\\n### References\\n\\n[1] Shenyang Huang et al. Temporal Graph Benchmark for Machine Learning on Temporal Graphs. In NeurIPS, 2023.\"}", "{\"comment\": \"Thank you for the authors' response. I appreciate the proposed setting and extensive empirical results presented in this paper. As a result, I have increased my score to 6.\"}", "{\"title\": \"Thanks for your further clarifications\", \"comment\": \"I appreciate the authors for their further clarifications of (1) deeper insights regarding the flaws of the current evaluation protocol and (2) preliminary experiment results about runtime. I believe these results would largely increase the soundness of this research; therefore, I'm inclined to increase my contribution score from 1 to 2, my soundness score from 2 to 3, and an overall rating from 3 to 5.\\n\\nPlease make sure to integrate our discussions during the rebuttal period into the camera-ready version.\"}", "{\"comment\": \"Thank you for your detailed review and for highlighting that our paper \\\"provides a novel perspective on the internal issues with the batch-oriented evaluation protocol\\\".\", \"we_respond_to_your_identified_points_for_improvement_in_the_following\": \"### W1\\n\\nWe respectfully disagree that the results in Figure 2 (histogram of edge activities) and 3 (batch durations for different batch sizes) are trivial. 
While the results are indeed intuitive to understand, we still believe that investigating the characteristics of temporal graphs shown in Figures 2 and 3 is crucial to highlight the potential biases and pitfalls in batch-based evaluations, which is why we included them.\\nIf the reviewer strongly disagrees with this point, we could possibly move Figure 3 to the appendix but would prefer to keep it in the main text.\\n\\n### W2\\n\\nThank you for pointing this out, we have added a more detailed explanation in the form of additional Appendix G to the uploaded revision of our paper.\\n\\n### W3\\n\\nWe thank the reviewer for these crucial questions, which we are happy to clarify below.\\nThe goal of our work is to facilitate a more realistic evaluation and a fairer comparison between different models by fixing problems with the prediction task. What do we mean by this? The current formulation of dynamic link prediction allows tuning the batch size as an implicit hyperparameter that - as we show in our work - critically affects model performance. But, as we also show in our work, changing the batch size fundamentally changes the characteristics of the prediction problem: the resulting batches span different lengths and the NMI between edges' timestamps and their batch numbers change. Moreover, we argue that the real-world characteristics of a link prediction setting suggest a sensible time horizon (we suggest horizons for the used datasets in Appendix C). In that sense, we argue that the task definition provided in our work (dynamic link forecasting) is \\\"better\\\" as it more closely reflects reality and is thus consistent across different models.\\n\\nMoreover, our experiments uncover a previously unseen bias toward memory-based models due to the information leakage that we identified in our work. 
Using a batch-based training pipeline and then evaluating the trained models on both the batch-based and our time-window-based approach, we observe a substantial drop in performance for memory-based models using our evaluation approach compared to the batch-based strategy. This drop in performance validates the identified information leakage that is fixed using our evaluation strategy.\\n\\n### W4\\n\\nThank you for this idea. We agree that reporting the runtime of our time-window-based approach and the runtime of the batch-based approach will improve our paper. Sadly, we did not record the runtime during our initial experiments and it is not feasible to rerun all experiments during the rebuttal. However, we propose to include them in the camera-ready version.\\n\\n### W5\\n\\nWe can see that our explanation in the limitations section (lines 528-529) is rather vague and we are happy to provide more details in the uploaded revision as well as below.\\nRanking-based approaches typically sample a large number (e.g. 100 for \\\"tgbl-review\\\" [1]) of negative samples for each positive sample.\\nCommonly, this combination of one positive and many negative samples for this positive edge is then used as a batch.\\nWhile the problem of information leakage remains because there might still be multiple batches with the same timestamp, using only one positive sample per batch alleviates the information loss since each batch only comprises the single timestamp of the positive edge.\\nThis leads to a better estimation of the models' precision but also increases the runtime of the evaluation by a large amount.\\nOur time-window-based approach is much faster because it uses fewer negative samples.\\nThe model's precision, on the other hand, may not be as accurate.\\nHowever, we argue that this is a reasonable simplification since in real-world scenarios the temporal resolution of the available data is often not the same as the one that is necessary for the prediction, e.g. 
while data on past purchases for each customer might be available in a per-second resolution, it is enough to provide customer recommendations every hour or day.\"}", "{\"comment\": \"I appreciate the authors' responses and revisions, which address some of my concerns. However, some of my concerns remain, especially for the **interpretations about fairness**, which is also related to the **parameter setting of $h$**.\\n\\nIn the response to W1, the authors argued that most existing evaluation strategies contain a hyper-parameter of batch size, whose value may significantly affect the evaluation results. It is a significant observation. Although the authors argued that $h$ in this paper is not a hyper-parameter (in their response to W3), I still respectfully disagree with it. As I can check in Appendix C of the revised paper, **$h$ seems to still be set based on some intuitive background knowledge** (e.g., the dynamics of Reddit are fast, so $h$ was set to 15min). Some of these intuitions are also unclear (e.g., why we set $h = 15min$ but not $h = 10min$ due to the fast dynamics of Reddit). In this sense, **$h$ still plays a role similar to a hyper-parameter**. Thus, **we cannot highlight that the proposed method is more fair than existing techniques**, from my perspective.\\n\\nIn my expectation, $h$ should be a deterministic unique value calculated by an equation regarding characteristics of the prediction task. In this sense, **formal definitions and theoretical guarantees are**, to some extent, **necessary** in making such a contribution. However, as claimed by the authors, they could not give related definitions and theoretical analysis.\\n\\nAnother concern of mine is about the definition of NMI. I know that NMI is a widely-used metric but I believe that different tasks may have different definitions regarding this metric. For instance, NMI of community detection should be different from that in this paper. 
People familiar with community detection may not be familiar with the evaluation of temporal link prediction. It is recommended to give the formal definition of NMI using the notations in this paper (e.g., $B_i^+$ and $B_i^-$) rather than just using the high-level notations of X and Y in Appendix G of the revised paper.\\n\\nTherefore, I keep my score.\"}", "{\"title\": \"Thanks for clarifications on the minors\", \"comment\": \"Thank you for your clarifications on the minor weaknesses. In my opinion, most of these can be easily addressed and do not impact my overall evaluation of the paper. Regarding point 5, specifically whether to explicitly discuss the implementation in the paper, I have seen both approaches\\u2014some papers include detailed implementation discussions, while others do not. I appreciate that the authors have provided the source code. Highlighting the modifications either within the paper or in the README/documentation of the source code would both be effective ways to help readers better understand the implementation. 
I believe the authors are free to choose the approach that best fits their work.\"}", "{\"comment\": \"### W2\", \"regarding_your_comment_that_splitting_each_time_window_further_into_batches_leads_to_the_same_problems_as_a_batch_based_evaluation\": \"In a sense this is correct.\\nHowever, as we point out in lines 382-391, our approach does not \\\"preclude information loss entirely\\\", but instead \\\"controls the information loss\\\" ensuring that this loss of temporal information is consistent for all time windows even if the number of links in certain time windows exceeds the memory of the used GPU.\\nCrucially, we require that for all batches that correspond to the same time window not only the temporal information within each batch is discarded but also across all batches from this time window.\\nThus, we deliberately discard information inside each time window which instead of preventing intra-time-window information loss, prevents information leakage.\\nThank you for pointing this out, we clarified it further in the uploaded revision.\", \"regarding_your_comment_about_time_windows_with_no_edges\": \"Yes, that is correct. It is possible that time windows are empty; in this case, the correct prediction would be that no interactions occur during that time window. Could you elaborate on why this would be an issue or why it would be expected that interactions should occur during each time window?\\n\\n### W3\", \"regarding_your_comment_about_nmi\": \"Normalized Mutual Information (NMI) is a common information-theoretic measure [2], which is often used in the context of clustering and community detection to compare two different clusterings. 
\nIt is based on mutual information, which for two random variables $X$ and $Y$ captures the bits of information that we gain about the outcome of $Y$ if we know the outcome of $X$ and vice-versa.\nThe specific value of mutual information depends on the entropy of the underlying random variables, and is thus difficult to compare across different settings. \nTo address this issue, normalized mutual information provides a measure between zero and one that is normalized based on the entropies of the underlying random variables.\nIn the context of our work, we use the NMI to capture the loss of information in batch-based evaluations, or - in other words - how many bits of information about the timestamps of edges we gain based on the batch number only.\nWe use the standard implementation in the Python library scikit-learn [3] to compute NMI.\nWe note that mutual information-based measures have frequently been used to evaluate information loss that is due to time aggregation in temporal graphs [4,5]. We have added a more detailed explanation in the form of an additional Appendix G to the uploaded revision.\n\nRegarding your comment about \"hyper-parameter\" $h$:\nWe argue that the horizon h should not be treated as a hyperparameter. Instead, it should be determined by the characteristics of the prediction task, see section 3.2. In Appendix C, we also suggest realistic values for h for the considered datasets. However, we added a plot (Figure 6 in the uploaded revision) that showcases the number of links per time window for various time horizons, similar to Fig. 3, in the appendix.\n\n### W4\n\nMay we ask what kind of properties the reviewer has in mind? It may be difficult, if not impossible, to provide general guarantees that hold for all possible models and datasets because we are proposing an evaluation approach, not a specific model or metric. 
We are also not aware of any theoretical guarantees in the context of the dynamic link prediction task.\\n\\n### References\\n\\n[1] Farimah Poursafaei, Shenyang Huang, Kellin Pelrine, and Reihaneh Rabbany. Towards Better Evaluation for Dynamic Link Prediction. In NeurIPS, 2022.\\n\\n[2] Vinh et al., Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance, J. Mach. Learn. Res. 11 (3/1/2010), 2837\\u20132854.\\n\\n[3] https://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html\\n\\n[4] Pfitzner, R., Scholtes, I., Garas, A., Tessone, C. J. & Schweitzer, F. Betweenness Preference: Quantifying Correlations in the Topological Dynamics of Temporal Networks. Phys. Rev. Lett. 110, 198701 (2013).\\n\\n[5] Weng, T., Zhang, J., Small, M. et al. Memory and betweenness preference in temporal networks induced from time series. Sci Rep 7, 41951 (2017). https://doi.org/10.1038/srep41951\"}", "{\"comment\": \"I thank the authors for their responses. They've provided clear answers and with their answers, i feel that this is a valuable contribution to the temporal graph learning literature. I've increased my score accordingly.\"}", "{\"title\": \"Thanks for your clarifications\", \"comment\": \">W1 Figure 2 and 3\\n\\nThis is a relatively minor point and does not significantly impact my overall evaluation of the manuscript. While the authors are free to decide what to include in the main text, I suggest that it may not be necessary to display all datasets there. 
Instead, selecting one or two representative examples for the main text and moving the rest to the appendix could improve readability and focus.\\n\\n> W2 NMI explanation\\n\\nThanks for having an appendix section to explain what NMI is for common readers I think it is pretty clear now.\\n\\n> W3 Comparison with previous evaluation protocol.\\n\\nI appreciate the clarity of the overall story and argument presented in this manuscript. Your arguments intuitively make sense to me, but I wonder if they could be further strengthened with some experimental evidence to validate the intuition scientifically. For instance, a case study of several algorithms could demonstrate how they exploit or \\\"hack\\\" the previous protocol. Evidence might include intermediate results (e.g., values prior to the final AUC-ROC) showing discrepancies or abnormalities, or unusual observations made during evaluations using the previous protocol\\u2014potentially highlighting how your research idea for this paper was identified.\\n\\n> W4 Efficiency\\n\\nI understand that the rebuttal period is significantly shorter than the time it took to complete this research. However, I would appreciate it if the authors could provide some sample results\\u2014there\\u2019s no need to report all results, but sharing a subset would help clarify and strengthen the argument of efficiency.\\n\\n> W5 pros/cons of time-window-based approaches and ranking approaches\\n\\nThank you for your clarifications. Please ensure these points are integrated into the camera-ready version.\\n\\nThe rebuttal effectively addresses some of my concerns (W1, W2, and W5). However, the remaining issues (W3 and W4) could benefit from further elaboration, with W3 being the most critical in my view. At this stage, I am inclined to increase my score to 5, but I will make a final decision at the end of the rebuttal period.\"}" ] }
5J9B7Sb8rO
Quantized Spike-driven Transformer
[ "Xuerui Qiu", "Malu Zhang", "Jieyuan Zhang", "Wenjie Wei", "Honglin Cao", "Junsheng Guo", "Rui-Jie Zhu", "Yimeng Shan", "Yang Yang", "Haizhou Li" ]
Spiking neural networks (SNNs) are emerging as a promising energy-efficient alternative to traditional artificial neural networks (ANNs) due to their spike-driven paradigm. However, recent research in the SNN domain has mainly focused on enhancing accuracy by designing large-scale Transformer structures, which typically rely on substantial computational resources, limiting their deployment on resource-constrained devices. To overcome this challenge, we propose a quantized spike-driven Transformer baseline (QSD-Transformer), which achieves reduced resource demands by utilizing a low bit-width parameter. Regrettably, the QSD-Transformer often suffers from severe performance degradation. In this paper, we first conduct empirical analysis and find that the bimodal distribution of quantized spike-driven self-attention (Q-SDSA) leads to spike information distortion (SID) during quantization, causing significant performance degradation. To mitigate this issue, we take inspiration from mutual information entropy and propose a bi-level optimization strategy to rectify the information distribution in Q-SDSA. Specifically, at the lower level, we introduce an information-enhanced LIF to rectify the information distribution in Q-SDSA. At the upper level, we propose a fine-grained distillation scheme for the QSD-Transformer to align the distribution in Q-SDSA with that in the counterpart ANN. By integrating the bi-level optimization strategy, the QSD-Transformer can attain enhanced energy efficiency without sacrificing its high-performance advantage. We validate the QSD-Transformer on various visual tasks, and experimental results indicate that our method achieves state-of-the-art results in the SNN domain. For instance, when compared to the prior SNN benchmark on ImageNet, the QSD-Transformer achieves 80.3\% top-1 accuracy, accompanied by significant reductions of 6.0$\times$ and 8.1$\times$ in power consumption and model size, respectively. 
Code is available at https://github.com/bollossom/QSD-Transformer.
[ "Spiking Neural Network+Spike-driven+Quantized Spiking Transformer+ Neuromorphic Computing" ]
Accept (Poster)
https://openreview.net/pdf?id=5J9B7Sb8rO
https://openreview.net/forum?id=5J9B7Sb8rO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xPbQltwJpQ", "u0o4gumeUt", "l7dU8RBRXA", "hGPlXH7pEJ", "Znr7YU5mHb", "WlvrnBPVU8", "U5aBVYGogN", "RP7ECBsF6Y", "R2IeFpSNqt", "Qd1UCAiTdG", "HJL4c39ayM", "GUme3Nmyjr", "Ct4PhdbZC2", "AEINbElUAW" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732157473341, 1730603142641, 1730438330310, 1730601890293, 1730103661228, 1731466652049, 1732470947959, 1732152279273, 1731995249457, 1732393142282, 1737523395916, 1731995318174, 1731987576795, 1734615630958 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission436/Authors" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_pkTr" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_Uosk" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_iFNE" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_ynnK" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_xmWz" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_pkTr" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_iFNE" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_ynnK" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_pkTr" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_ynnK" ], [ "ICLR.cc/2025/Conference/Submission436/Reviewer_iFNE" ], [ "ICLR.cc/2025/Conference/Submission436/Area_Chair_stbR" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer ynnK,\\n\\nWe would like to express our deepest gratitude for your constructive feedback and the goodwill you've shown towards our work.\\n\\nThank you for your response. Your comments are truly inspiring and have provided invaluable guidance for improving our paper. We have added the total training time for QSD-Transformer and SpikeZIP-TF on ViT-S. 
\\n\\nIt is our sincere hope that our responses and corrections have satisfactorily resolved the issues you raised. Should there be any further questions or clarifications you require, please do not hesitate to contact us directly. We are more than willing to engage in further discussions to enhance the quality of our work to your satisfaction.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper presents QSD-Transformer, a quantized spike-driven transformer that addresses the challenge of implementing efficient spiking neural networks (SNNs) while maintaining performance. The work shows three key points: (1) a lightweight quantized baseline for spike-driven transformers, (2) a bi-level optimization strategy to address spike information distortion (SID), and (3) information-enhanced LIF (IE-LIF) neurons with fine-grained distillation for improved performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"First approach to quantizing spike-based transformers with SID analysis\", \"Solid theoretical analysis and proofs for the proposed methods\", \"Competitive accuracy on ImageNet (80.3%) with low energy\", \"Extensive experiments on multiple tasks (classification, object detection, segmentation)\"], \"weaknesses\": [\"Lack of comparison with the previous SNN and quantized ANN transformer models (Connected to question #1)\", \"Limited scalability in ImageNet experiments (Connected to question #2)\", \"Huge training overhead compared to the conventional spike-based transformer due to multi-bit spike and knowledge distillation (Connected to question #3)\"], \"questions\": \"1. To make the ImageNet results table solid, authors can add additional results such as QKFormer [1] and previous QAT-ViT models [2].\\n2. I just wondered why only small sizes of architecture (1.8M, 3.9M, 6.8M) are used for training the ImageNet dataset. Is there any scalability issue with this method?\\n3. 
This work uses multi-bit spikes during training and knowledge distillation with ANN architecture, which causes huge training overhead in training time and memory. Can you present any analysis of this overhead?\\n4. In the transfer learning section, the overall information is insufficient. Which bit-width did you use? And could you provide us with the accuracy of CIFAR10/100, and CIFAR10-DVS without transfer learning?\\n5. Can the authors provide the firing rate information? Compared to the original Spike-driven Transformer-V2, how has the firing rate changed in the self-attention part?\\n\\n[1] Zhou, Chenlin, et al. \\\"QKFormer: Hierarchical Spiking Transformer using QK Attention.\\\" arXiv preprint arXiv:2403.16552 (2024).\\n[2] Li, Yanjing, et al. \\\"Q-vit: Accurate and fully quantized low-bit vision transformer.\\\" Advances in neural information processing systems 35 (2022): 34451-34463.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a quantized spike-driven transformer and increases the accuracy of the SNN by proposing an information-enhanced LIF model and fine-grained distillation from the lower level and upper level respectively. Experiments show that these technologies reduce energy consumption significantly while increasing the accuracy of SNN.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The IE-LIF neuron combines the conversion algorithm and training algorithm, which is novel.\\n3. The experimental results are significant.\", \"weaknesses\": \"1. The training of IE-LIF neurons does not utilize temporal information, which is not suitable for temporal benchmarks.\\n2. The reason why the IE-LIF neuron and fine-grained distillation reduce the energy consumption is not provided.\", \"questions\": \"1. 
Why do the IE-LIF neuron and fine-grained distillation reduce the energy consumption? Do these technologies reduce the number of synaptic operations?\\n2. Why is the energy reduction on the COCO2017 dataset not significant?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To tackle the issues of high-parameter spike-based Transformer in resource-constrained applications and the low performance of directly quantized spike-based Transformers, this paper introduces a Quantized Spike-driven Transformer. It uses a bi-level optimization strategy, including an Information-Enhanced LIF neuron and fine-grained distillation, to counteract quantization-induced performance degradation. The comparative and ablation experiments demonstrate the effectiveness of the proposed methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Based on Information entropy, this paper proposes a bi-level optimization strategy, which mitigates quantization-induced performance drops in baseline quantized spiking Transformers.\\n\\n2. The proposed IE-LIF spike neuron is hardware-friendly and converts to a binary-spiking LIF neuron during inference to maintain spike-driven nature.\\n\\n3. Experimental results on ImageNet and a large number of vision tasks (detection, segmentation, and transfer learning) show that the method is effective and energy-efficient on various spike-based transformers.\", \"weaknesses\": \"1. The authors describe the implementation method of surrogate gradients for binary spikes in the appendix. However, the proposed IE-LIF in the main text is multi-bit. Could the authors explain how the aforementioned surrogate gradients are deployed in the proposed neurons?\\n\\n2. Fast-SNN [1] converts quantized ANNs to SNNs and is a competitive ANN-to-SNN method. 
Like this paper, it aims to reduce quantization error and employs quantization techniques to enhance energy efficiency. The basic idea of Fast-SNN is to reduce both quantization error and accumulating error, achieving excellent performance on many vision tasks (classification, detection, and segmentation). Could you include comparative experimental results with Fast-SNN?\\n\\n3. There are some typos in the text, such as equation (1) on line 164; also, on line 292 it seems you intended to reference Fig. 1(b). \\n\\n[1] Hu, Yangfan, et al. \\\"Fast-SNN: fast spiking neural network by converting quantized ANN.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 12, pp. 14546-14562, Dec. 2023, doi: 10.1109/TPAMI.2023.3275769.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The author proposed the Quantized Spike-Driven Transformer (QSD-Transformer) to tackle the spike information distortion (SID) challenge resulting from quantized spike-driven self-attention (Q-SDSA). The author addressed the problem at two levels: 1) an Information-Enhanced LIF (IE-LIF) neuron to rectify the information distribution in Q-SDSA at the lower level; 2) a fine-grained distillation scheme for the QSD-Transformer to align the distribution in Q-SDSA with that in the counterpart ANN at the upper level. QSD-Transformer achieved promising results on multiple computer vision tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Extensive experiments and ablation studies on Image Classification, Object Detection and Semantic Segmentation.\\n2. The proposed Information-Enhanced LIF (IE-LIF) neuron is effective in rectifying the information distribution in Q-SDSA through information theory.\\n3. 
Clear writing and methodology.\", \"weaknesses\": \"1. The comparison between ANN2SNN and Direct Training methods is limited. Currently, MST is no longer the SOTA ANN2SNN method. SpikeZIP-TF (ICML 2024) [1] and ECMT (ACM MM 2024) achieve better performance on Image Classification tasks. The performance of SpikeZIP-TF and ECMT on ImageNet surpasses QSD-Transformer by a large margin. In addition, ANN2SNN methods have an advantage in saving computational resources compared with Direct Training methods. It is recommended that the authors conduct a more comprehensive comparison between those two methods.\\n2. The method proposed by the author is somewhat cumbersome; did the training time provided by the authors in the appendix include the training time of FGD?\\n3. It is recommended that the authors extend the method to NLP tasks to verify the transferability of QSD-Transformer.\\n\\n[1] Kang You*, Zekai Xu* et al. SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN. International Conference on Machine Learning 2024\\n[2] Zihan Huang, Xinyu Shi et al. Towards High-performance Spiking Transformers from ANN to SNN Conversion. ACM Multimedia 2024\", \"questions\": \"1. I wonder whether you quantized the membrane potential or not. If you didn't quantize the membrane potential, it seems hard to implement your method on hardware.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a Quantized Spike-Driven Transformer, addressing the spike information distortion during quantization caused by the bimodal distribution of Quantized Spike-Driven Self-Attention (Q-SDSA). A bi-level optimization strategy is introduced, incorporating information-enhanced LIF and fine-grained distillation to rectify the distribution of Q-SDSA.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
This paper aims to address the problem of performance degradation of the Spike Transformer after quantization, attributing it to the spiking information distortion (SID) problem. The authors present a loss function based on mutual information maximization to tackle the problem.\\n2. The authors conduct experiments across various tasks to demonstrate the effectiveness of the proposed method.\\n3. The paper is well organized and easy to follow.\", \"weaknesses\": \"1. The primary reason for the improved quantization performance of the proposed method is the use of multi-bit spikes instead of traditional 0-1 spikes. Specifically, the implementation extends to 4 virtual timesteps, which inevitably reduces the training and inference efficiency. However, the manuscript does not provide a detailed explanation or analysis of this trade-off, which would be beneficial for understanding the overall impact on efficiency.\\n2. The empirical comparison could be done more thoroughly by comparing with other recent state-of-the-art methods.\\n3. Some content in the paper seems unnecessary, such as Appendix A, which does not contribute significantly to the main arguments or findings and could be omitted for conciseness.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate your detailed explanation. I decided to raise my score from 5 to 6.\"}", "{\"comment\": \"Many thanks for the authors' rebuttal, which solved my problem. So I'm willing to raise my score.\"}", "{\"comment\": \"Could you give the comparison between the total training time of QSD-Transformer and SpikeZIP-TF on ViT-S?\"}", "{\"comment\": \"I appreciate your response and extra experiments. 
Most of the concerns have been addressed, but I still have one question.\\n\\n- Your new results in Table 1 are on the ImageNet dataset, and training ImageNet takes some time. Are the new results (QKFormer and ViT-S) trained from scratch, or fine-tuned?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"The authors have addressed most of my concerns and I decided to raise my score.\"}", "{\"title\": \"Response to authors' rebuttal\", \"comment\": \"I appreciate the authors' thorough response, which has addressed some of my concerns. However, I still have further questions and comments.\\n1. Can the training efficiency be improved with this approach when compared with the original spiking transformer?\\n2. Is it feasible for this method to be effectively applied to NLP tasks?\"}", "{\"metareview\": \"This paper proposes the QSD-Transformer, a quantized framework for spike-based Transformers, introducing a bi-level optimization strategy that incorporates information-enhanced LIF and fine-grained distillation to rectify the attention distribution. Most of the reviewers have positive comments on this work, and some of them raised their scores after the response. Thus, this paper can be accepted; please prepare the final version well.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer xmWz (Point Raised):\\nConcern over the potential reduction in training and inference efficiency due to the extension to 4 virtual timesteps.\\nAuthors\\u2019 Response:\\nClarified that the use of multi-bit pulses within a single time step during training (IE-LIF) and their conversion to binary pulses during inference reduces memory requirements and improves training speed. 
Provided data showing a 3.2\\u00d7 speedup in training and a 6.1\\u00d7 reduction in memory usage.\\n\\nReviewer xmWz (Point Raised):\\nSuggestion for a more thorough empirical comparison with other state-of-the-art methods.\\nAuthors\\u2019 Response:\\nIncluded results comparing QSD-Transformer with the latest state-of-the-art method, QKformer, demonstrating the effectiveness of their approach.\\n\\n\\nReviewer pkTr (Point Raised):\\nLack of comparison with previous SNN and quantized ANN transformer models.\\nAuthors\\u2019 Response:\\nAdded comparisons with QKFormer and QAT-ViT models to strengthen the empirical evidence.\\n\\nReviewer pkTr (Point Raised):\\nConcern about the scalability of the method, given the use of small architectures for ImageNet experiments.\\nAuthors\\u2019 Response:\\nExplained that the small architectures were the result of quantizing larger spike-based Transformer models and that the largest quantization baseline used was 55M, demonstrating scalability. Plans to apply the method to even larger models in the future were mentioned.\\n\\n\\nReviewer pkTr (Point Raised):\\nLarge training overhead due to multi-bit spikes and knowledge distillation.\\nAuthors\\u2019 Response:\\nProvided analysis showing that IE-LIF and FGD techniques did not increase training overhead and in fact reduced it compared to traditional spike-based Transformers.\\n\\nReviewer pkTr (Point Raised):\\nInsufficient information in the transfer learning section, specifically about bit-width and accuracy without transfer learning.\\nAuthors\\u2019 Response:\\nClarified the bit-width used and provided accuracy results for CIFAR10/100 and CIFAR10-DVS with direct training, showing high accuracy.\\n\\nReviewer pkTr (Point Raised):\\nRequest for firing rate information and changes in the self-attention part compared to the original Spike-driven Transformer-V2.\\nAuthors\\u2019 Response:\\nApologized for the oversight and presented the changes in the firing rates of the 
attention mechanism modules before and after quantization.\", \"reviewer_xmwz\": \"Feasibility of applying the method to NLP tasks.\\nAuthors\\u2019 Response:\\n\\nConfirmed the feasibility and added experiments on NLP tasks using the QSD-Transformer quantization framework, with comparisons based on Spikezip and SpikeBERT.\", \"reviewer_pktr\": \"Limited comparison between ANN2SNN and Direct Training methods, with other methods outperforming QSD-Transformer.\\nAuthors\\u2019 Response:\\nAddressed the concern by including a comparison with state-of-the-art ANN2SNN methods like SpikeZip and ECMT, and demonstrated the application of QSD-Transformer to models like Spikezip, showing improved results.\\n\\n\\n\\n- The authors effectively addressed the concerns and questions raised by the reviewers, providing additional experimental results and clarifications that strengthened the manuscript.\\n- The decision to accept the paper was influenced by the authors\\u2019 ability to show improved training efficiency, the feasibility of applying their method to NLP tasks, and the suitability of their approach for temporal benchmarks.\\n- The inclusion of comparative results with Fast-SNN and other state-of-the-art methods demonstrated the robustness of the QSD-Transformer quantization framework.\\n- The decision also considered the overall contribution of the work to the field and the clarity of the responses to the reviewers\\u2019 points.\"}
5IvTw0qMKj
C$^{2}$INet: Realizing Incremental Trajectory Prediction with Prior-Aware Continual Causal Intervention
[ "Xiaohe Li", "Feilong Huang", "Zide Fan", "FLM", "Leilei Lin", "Yingyan Hou", "Lijie Wen" ]
Trajectory prediction for multi-agents in complex scenarios is crucial for applications like autonomous driving. However, existing methods often overlook environmental biases, which leads to poor generalization. Additionally, hardware constraints limit the use of large-scale data across environments, and continual learning settings exacerbate the challenge of catastrophic forgetting. To address these issues, we propose the Continual Causal Intervention (C$^{2}$INet) method for generalizable multi-agent trajectory prediction within a continual learning framework. Using variational inference, we align environment-related prior with the posterior estimator of confounding factors in the latent space, thereby intervening in causal correlations that affect trajectory representation. Furthermore, we store optimal variational priors across various scenarios using a memory queue, ensuring continuous debiasing during incremental task training. The proposed C$^{2}$INet enhances adaptability to diverse tasks while preserving previous task information to prevent catastrophic forgetting. It also incorporates pruning strategies to mitigate overfitting. Comparative evaluations on three real and synthetic complex datasets against state-of-the-art methods demonstrate that our proposed method consistently achieves reliable prediction performance, effectively mitigating confounding factors unique to different scenarios. This highlights the practical value of our method for real-world applications.
[ "Trajectory Prediction", "Causal Intervention", "Variational Inference", "Continual Learning" ]
Reject
https://openreview.net/pdf?id=5IvTw0qMKj
https://openreview.net/forum?id=5IvTw0qMKj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y7dOqC4pDz", "uaW92C5wqm", "u2mfNzSV86", "u1rCCl1iOO", "td1jwFTHOj", "pd7F2eDeny", "lTHNOP5BO0", "htfONjC0Xg", "aX092PvAeC", "LXoBtqsWhM", "HhzimQlBvP", "ABkiJgbCWz", "8oJxiTmz3H", "3D4fACmsLN" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review" ], "note_created": [ 1731942211054, 1731911919576, 1731912005053, 1730689225153, 1731942241567, 1731863705994, 1732559848511, 1734952755822, 1732594732152, 1731911969987, 1729669650132, 1731863674519, 1737524055019, 1730263840970 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Reviewer_Bhsa" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Area_Chair_gNez" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Submission10454/Reviewer_zjLH" ], [ "ICLR.cc/2025/Conference/Submission10454/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10454/Reviewer_Nzn5" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer Bhsa Part1\", \"comment\": \"Thank you very much for your positive feedback. 
Based on your comments, we will further refine our work, and we will now address the issues you have raised.\", \"q1\": \"Could the authors elaborate on how the proposed C2INet model could be implemented in practical real-world systems, especially where computational resources may be limited (e.g., embedded or low-power devices)?\\\\\", \"a1\": \"Thanks for your constructive question. C2INet is designed to be applied in real-world scenarios. Our mechanism is akin to a plug-in designed to enhance the training effectiveness of trajectory prediction models, focusing primarily on optimizing the continuously added priors $\\\\hat{P}^{*}$ and their weights $\\\\alpha$ in variational inference. We have proposed two training modes for different application scenarios: online and offline. The online mode is selected when device resources are limited or access to the entire dataset is restricted. In hardware devices, only a tiny amount of storage space is needed to store the task-related prior components that are continuously iterated (our proposed pruning strategy can ensure this).\\\\\\nMoreover, optimizing only a few parameters does not require many computational resources. This mode supports deployment on an actual vehicle and continuously optimizes the model's performance as extensive road data are collected, which significantly benefits real-world application scenarios. In our future work, we will also attempt to validate it on actual roads. The offline strategy involves unified local offline training after collecting complete data. Its parameter count is a multiple of the online mode's, proportional to the memory capacity. However, due to the limited queue capacity, its computational load is also completely acceptable in real-world applications.\", \"q2\": \"Can the authors provide a more simplified explanation of the theoretical basis, particularly the optimization of KL divergence and the adjustment of priors in multi-task scenarios? 
This would help make the paper more accessible to a broader audience.\\\\\", \"a2\": \"Thank you for your feedback. We apologize for the previous explanation not being comprehensive enough. We have supplemented the derivations of the causal intervention and the objective function in Appendix A.2 of the revised version.\\\\\\nBriefly, our goal is to maximize the probability density $P(Y|do(X))$, where do-calculus is used to intervene in the confounding factors affecting the observed sample $X$. To facilitate the calculation of the confounder distribution $P(C)$ in the environment, we introduce variational inference to approximate the actual posterior distribution of latent variables. We propose an additional context encoder $Q(C|X)$ to estimate $C$. To achieve continual learning capability, and inspired by research on priors in the field of variational inference, we aim to obtain an optimal prior $P(C)$ that can satisfy the training needs of different environments. In practice, we take the characteristic distribution of confounding factors as the optimization goal, providing the objective function for multiple environments and its formula transformations (Eq.2, Eq.4-6, and Eq.18). Furthermore, by employing limit approximation and increasing information entropy, we can derive the formula Eq.8 for calculating the new component. Based on the bi-convexity property, we sequentially update the pseudo feature $U_K$ of the newly added component and its corresponding coefficient $\\\\alpha_{K-1}$.\", \"q3\": \"Is there any plan to test the generalization ability of C2INet on larger or more diverse datasets, including scenarios with real-time prediction in live driving conditions? How might the model perform in unseen or significantly different environments?\\\\\", \"a3\": \"Thank you for noting this. Our research aims to address practical application needs, and we will continue to incorporate data from real-world scenarios to validate our methods. 
Here, we supplement our evaluation with the Apollo[1] dataset, comparing the performance of different methods on it. The Apollo trajectory prediction dataset comprises 53 one-minute vehicle trajectory segments, each with a sampling rate of 2 frames per second. The categories of agents include small vehicles, big vehicles, pedestrians, motorcyclists, bicyclists, and others. Based on the number of agents present in each segment, the dataset can be divided into five scenarios:\\\\\\nScene 1 (Very few targets): [0, 40), a total of 3 segments\\\\\\nScene 2 (Few targets): [40, 70), a total of 15 segments\\\\\\nScene 3 (Moderate number of targets): [70, 100), a total of 14 segments\\\\\\nScene 4 (Many targets): [100, 140), a total of 13 segments\\\\\\nScene 5 (Very many targets): [140, +\\\\infty), a total of 8 segments\\\\\\nThe experimental results are shown in the table below.\"}", "{\"title\": \"Reply to Reviewer Nzn5 Part1\", \"comment\": \"Thank you for reviewing and evaluating our work. We are especially grateful for your recognition of our Innovative Design, Adaptability, and Experiments. Overall, we have made additions in accordance with your requests, including supplementary mathematical derivations, computational resource analysis, and the addition of new experimental datasets.\\nNext, we will specifically address the concerns you have raised:\", \"q1\": \"You have not provided detailed mathematical derivations for the causal intervention section. Could you elaborate further or reference additional causal inference theories to support the method? Specifically, how do you ensure that the intervention effectively removes confounding factors, and have you considered potential causal relationships between different tasks?\\\\\", \"a1\": \"We apologize for the confusion caused by the previously incomplete derivation details. 
We have supplemented Appendix A.2 with the derivations of the causal intervention in Section 2.3 and the objective function in Section 3. Overall, we derive $P(Y|do(X))$, the conditional probability relationship between the confounding factor $C$ and the observed variable $X$, based on two rules of do-calculus: \\\\emph{Action/observation exchange} and \\\\emph{Insertion/deletion of actions}. Essentially, the above debiasing process aims to eliminate spurious correlations. This is accomplished by cutting off the path $C \\\\rightarrow X$ and establishing the direct relationship $X \\\\rightarrow Z \\\\rightarrow Y$. However, environmental influences are often inaccessible in real-world scenarios, and collecting comprehensive trajectory data is costly and complex. The crux of debiasing is obtaining an accurate distribution of the confounding factors $P(C)$, which we approximate precisely here through posterior estimators of the samples. The visualized experimental results in Section 5.6 confirm our ability to accurately obtain separable environmental priors for each task. As you mentioned, causal relationships may exist even across different tasks. Through optimization, we obtain the priors of the continuously introduced tasks and their appropriate weights. This process ensures that the priors aggregated via the encoder can serve as an effective feature space of confounders for the debiasing process during training.\", \"q2\": \"What are the computational complexity and storage requirements of C2INet? How do you ensure real-time performance in hardware-constrained environments (e.g., embedded systems or mobile devices)? Is there a quantitative analysis of resource consumption, or have you tried to optimize the algorithm to reduce computational load?\\\\\", \"a2\": \"Thank you for your constructive comments. We have added an analysis of the model's computational and storage requirements. 
To adapt to continual training scenarios, our C2INet offers both online and offline training strategies. The online mode is more suitable for real-world applications, enabling rapid training updates. During the training process for each scenario, after a set number of batch rounds, a new component's pseudo feature set is trained and optimized according to Eq.8 of the revised paper, followed by iterative optimization of its optimal weights using Eq.9. Notably, the above process is interspersed within the regular training of the model (it can even be carried out in parallel, a fact we will verify in the future), with the additional computational resource cost being only the training of the component. The storage requirement is mainly used to store the pseudo feature set in the memory module. The offline mode involves unified local offline training after collecting a large amount of data, with the number of optimized parameters being a multiple of the online mode's, proportional to the memory capacity. However, the computational load is also entirely acceptable in real-world applications due to the limited queue capacity. \\\\\\nBelow, we have documented the costs incurred by the backbone and C2INet, including training duration, FLOPs, number of parameters, and storage amount. The inference process incurs no additional overhead. It can be seen that C2INet does not bring significant additional overhead to the regular training of the backbone model, whether in terms of computation or storage.\\n| Index | STGAT (Backbone) | C2INet |\\n| -------- | -------- | -------- |\\n| Time | 5938ms/Epoch | 2.98ms/Epoch | \\n| FLOPs | 74.73 M/Epoch | 7056/Epoch |\\n| Parameters | 4234 | 3664 |\\n| Storage | 402192B | 34B |\"}", "{\"title\": \"Reply to Reviewer Nzn5 Part3\", \"comment\": \"Q6: Given that C2INet includes multiple complex modules (e.g., memory module and causal intervention), how interpretable is the model? 
Do you plan to provide more intuitive visualizations or explanations to demonstrate the model\\u2019s decision-making process?\\\\\", \"a6\": \"Thank you for highlighting the issues above. We have optimized the model framework diagram in Fig.1 of the initial version of the paper, which will be presented in the revised Fig.2 (to be uploaded shortly). In the upper part of the figure, we describe the optimization process of Causal Intervention, which introduces environment-related priors through variational inference and cuts off the influence of spurious features on the correlation with trajectory data. However, the debiasing capability learned in the past becomes ineffective as the environment changes. To counteract catastrophic forgetting, we seamlessly integrate the proposed memory module with the causal intervention mechanism and ensure low resource occupancy through a pruning strategy. This allows the environmental context encoding to always obtain a continuously optimized confounding prior through multiple pseudo features. Our method is primarily used to enhance training effectiveness. Inference is performed directly through the normal trajectory decoding process.\\n\\nQ7: Since the paper has already discussed the situation of hardware resource constraints, has it considered other optimization methods such as hardware acceleration or knowledge distillation? What advantages does the proposed optimization method have compared to these alternative approaches?\\\\\\nA7: The issue you raised is precisely one we have considered. We enhance overall efficiency during the training process in several ways. \\\\\\nFirstly, we provide dynamic memory queue capacity, allowing dynamic control of storage resource consumption based on changes in actual scenarios. We have tested queue capacity in Appendix A.4.4.\\\\\\nSecondly, we have considered both online and offline versions for training pseudo features. 
The online mode updates only one component and its weight value at a time, which significantly improves computation speed and is more suitable for edge devices. We have analyzed the FLOPs and the number of parameters saved for each training instance in Q2. Indeed, our prior intervention enhances the training effect of the trajectory prediction model as a plug-in and supports various hardware acceleration methods. As an additional constraint, it can even be conducted in parallel with the primary model's training process, significantly improving its efficiency (related experimental results will be released in the future). At the same time, we implement hardware resource optimization using an information gain-based pruning strategy.\\\\\\nThe third aspect is the generation frequency (the number of batch intervals between two pseudo-feature training rounds), which affects training speed. In the original article, we set this to be fixed. To verify whether the generation frequency affects the results, we test it on the ETH-UCY dataset. We control the number of epochs for each task to be 300, with a memory queue capacity of 40. It is observed that the expansion rate of components does not significantly affect model performance. However, the model performs better at an appropriate expansion rate. 
Therefore, the balance between training efficiency and model performance should be carefully considered in practical applications.\\\\\\n\\n|Generation Frequency | univ | eth | zara1 | zara2 | hotel |\\n| -------- | -------- | -------- | -------- | -------- | -------- |\\n|4 | 0.558/1.217 | 0.771/1.724 | 0.581/1.244 | 0.535/1.175 | 1.029/2.248|\\n|6| 0.569/1.228 | 0.781/1.714 | 0.581/1.214 | 0.525/1.115 | 0.978/1.947|\\n|8| 0.528/1.187 | 0.762/1.686 | 0.551/1.193 | 0.485/1.065 | 0.919/1.898|\\n|10| 0.538/1.197 | 0.769/1.704 | 0.560/1.203 | 0.495/1.095| 0.929/1.908|\\n|12|0.548/1.207|0.774/1.708|0.561/1.223|0.499/1.105|0.939/1.934|\\n\\nAs you mentioned, traditional methods such as knowledge distillation cannot effectively solve the problem of catastrophic forgetting under continual learning, which is precisely what we are dedicated to resolving.\\n\\nReference\\\\\\n[1] Ma, Yuexin, et al. \\\"Trafficpredict: Trajectory prediction for heterogeneous traffic-agents.\\\" AAAI. 2019.\"}", "{\"summary\": \"The paper introduces C2INet, a novel model designed to enhance multi-agent trajectory prediction in dynamic environments by addressing key challenges like environmental bias and catastrophic forgetting. C2INet incorporates a prior-aware memory module that stores optimal priors across tasks, enabling it to adapt incrementally to new scenarios while retaining performance on past tasks. A core component of the model is the continual causal intervention mechanism, which aims to disentangle latent confounding factors that could negatively impact prediction accuracy.\\n\\nThe model's design emphasizes a balance between computational efficiency and performance, employing variational inference to align environment-dependent priors with posterior estimates, thus maintaining robustness against latent biases. 
The inclusion of a pseudo-feature-based pruning strategy ensures the memory module remains efficient and manageable even as task volume increases. C2INet's framework is evaluated on datasets such as ETH-UCY and Stanford Drone, showcasing its strong adaptability and significant improvements in prediction metrics like Average Displacement Error (ADE) and Final Displacement Error (FDE) when compared to traditional trajectory prediction models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a novel method combining causal intervention with a continual learning framework, effectively addressing the problem of bias and forgetting in multi-task learning.\\n\\nThe experimental analysis demonstrates that C2INet performs robustly on multiple datasets, achieving significant improvements in trajectory prediction accuracy compared to existing methods.\\n\\nThe integration of the memory module and prior queue is well-motivated and effectively implemented to handle changing environments, showcasing the potential for real-world applications.\", \"weaknesses\": \"Although the paper introduces causal intervention comprehensively, some theoretical explanations, particularly regarding the optimization of KL divergence and multi-task prior adjustments, could benefit from simplification. This would make the paper more accessible to a broader audience.\\n\\nThe experiments are thorough but somewhat limited in terms of application diversity. The paper could be strengthened by including analyses on more varied or complex real-world scenarios, such as real-time predictions in live driving conditions.\\n\\nCertain sections, such as the derivation of equations and framework details, are presented in a complex manner that may challenge the reader's understanding. 
A clearer and more concise explanation would enhance readability.\\n\\nWhile the paper mentions addressing environmental biases, there is insufficient analysis of other types of potential biases in the dataset, such as sample distribution imbalance and long-tail effects. The paper does not delve deeply into how the model performs in broader contexts.\", \"questions\": \"Could the authors elaborate on how the proposed C2INet model could be implemented in practical real-world systems, especially where computational resources may be limited (e.g., embedded or low-power devices)?\\n\\nCan the authors provide a more simplified explanation of the theoretical basis, particularly the optimization of KL divergence and the adjustment of priors in multi-task scenarios? This would help make the paper more accessible to a broader audience.\\n\\nIs there any plan to test the generalization ability of C2INet on larger or more diverse datasets, including scenarios with real-time prediction in live driving conditions? How might the model perform in unseen or significantly different environments?\\n\\nWhile the paper discusses mitigating environmental bias, how does C2INet handle other types of data biases, such as sample distribution imbalances or long-tail effects? Would these impact the model's performance or lead to biased predictions?\\n\\nCould the authors provide more details about the computational complexity of C2INet, such as the number of parameters and resource usage? 
This would clarify the scalability and practicality of the model for large-scale or real-time applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer Bhsa Part2\", \"comment\": \"(continue)\\n|Index | STGAT | STGCNN | GCRL-STGAT | GCRL-STGCNN |C$^{2}$Inet-STGAT|C$^{2}$Inet-STGCNN|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n|Scene 1| 5.961/10.094| 6.261/10.699 | **5.737/9.694** | **6.04/10.32** |6.136/10.319 | 6.088/10.240 |\\n|:Scene 2 | 6.018/10.193 | 6.430/10.854 | 5.247/9.09 | 5.001/8.43 |**2.840/4.822** |**2.928/4.47**|\\n|:Scene 3 | 3.478/5.517 | 7.332/12.749 | 2.408/4.116 | 3.949/6.951 | **2.223/3.790** |**2.181/3.780**|\\n|:Scene 4 | 2.823/4.572 | 6.470/11.053 | 2.607/4.292 | 3.327/5.672 | **2.445/4.082** |**2.201/3.812**|\\n|:Scene 5 | 2.906/4.881 | 4.277/7.486 | 2.673/4.613 | 3.007/5.075 | **2.590/4.415** |**2.116/3.592**|\\n\\nIt can be discerned that STGAT's representational capacity is superior to that of STGCNN. Particularly, STGCNN demonstrates inferior performance as the task duration increases. GCRL, with the introduction of causal intervention strategies, can enhance the model's capabilities in various contexts, but it does not achieve optimal performance during prolonged training. C2INet effectively improves training stability, with the model's performance remaining stable from the second scenario onwards.\", \"q4\": \"While the paper discusses mitigating environmental bias, how does C2INet handle other types of data biases, such as sample distribution imbalances or long-tail effects? Would these impact the model's performance or lead to biased predictions?\\\\\", \"a4\": \"As you mentioned, the long-tail effect and class imbalance issues significantly impact the performance of trajectory prediction models. 
Indeed, our method can mitigate these issues to some extent.\\\\\\nFirst, our training process iterates sequentially across different scenarios, where the class imbalance is naturally part of the confounding factors. In each scenario, various long-tail distribution issues are prevalent, such as feature-level imbalance highlighted by Fend[2], and trajectory length imbalance pointed out by FlexiLength[3]. Ablation experiments in Appendix A.4.3 demonstrate that the causal intervention mechanism can improve model performance, which also addresses the long-tail distribution issue. In the future, we will further investigate the impact of long-tail distribution issues on environmental confounding. Secondly, our offline mode obtains the category distribution of sample features through pre-clustering at the beginning of task training, while the online mode gradually forms category clusters. Both of these methods effectively resolve class imbalance, as visualized in the confounding prior in section 5.6.\\n\\nQ5\\uff1aCould the authors provide more details about the computational complexity of C2INet, such as the number of parameters and resource usage? This would clarify the scalability and practicality of the model for large-scale or real-time applications.\\\\\", \"a5\": \"Thank you for pointing out considerations regarding practical applications. Our C2INet offers both online and offline training modes, with the online strategy being more suitable for real-world application scenarios, enabling rapid training. In each new scenario, after a set number of training batch rounds, a new component's pseudo feature set is added and optimized according to Eq.8 in the revised version, followed by iterative optimization of its optimal weights using Eq.9. Notably, the above process is interspersed within the model's regular training routine, with the additional computational resource cost being only the training of the component. 
The storage cost is the retention of the obtained pseudo feature set in the memory module.\\\\\\nBelow, we have recorded the normal training and the additional overhead brought by C2INet, including training time, FLOPs, number of parameters, and storage amount. There is no additional overhead during the inference process. C2INet does not impose a significant additional resource burden on the training of the existing model.\\n\\nReference\\\\\\n[1]\\tMa, Yuexin, et al. \\\"Trafficpredict: Trajectory prediction for heterogeneous traffic-agents.\\\" AAAI. 2019.\\\\\\n[2]\\tWang, Yuning, et al. \\\"Fend: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction.\\\" CVPR. 2023.\\\\\\n[3]\\tXu, Yi, and Yun Fu. \\\"Adapting to length shift: Flexilength network for trajectory prediction.\\\" CVPR. 2024.\"}
97.478/176.860| 96.964/175.811| 94.3827/170.6680 |\\n: hyang| 96.455/176.333|96.146/175.680| 91.3278/166.3408 | \\n: nexus| 87.426/160.058|87.099/159.347| 82.2640/151.4751 | \\n: little| 88.873/163.186| 88.531/162.492| 82.5079/153.4227 | \\n: quad| 88.470/162.514| 88.028/161.603| 80.5028/147.8559 |\", \"q5\": \"The results in terms of ADE/FDE on the SDD dataset are too high and seem strange. All results are above 50.0. Do you use a particular coordinate system instead of standard pixel coordinates?\\\\\", \"a5\": \"Thank you for paying attention to the details of the experimental result. In fact, we have validated our approach on the SDD dataset using pixel coordinates consistent with other studies, selecting the scene sequence 'Bookstore', 'Coupa', 'Deathcircle', 'Gates', 'Hyang', 'Nexus', 'Little', 'Quad' for continual learning, and focusing on targets such as 'Cart', 'Biker', 'Pedestrian', 'Skater', 'Bus'. We are the first to develop a method that trained on the SDD dataset in a continual learning setting for each scene. Specifically, we sequentially train and test on different scenes in the SDD dataset, with the model accessing only the data samples from the current scene each time. Table 1 in section 5.2 compares the average performance for current and previously completed tasks under continual learning. The experimental results are worse than those obtained with traditional experimental setups, but this is not unreasonable; for example, STGAT can achieve 31.19 ADE under standard settings [14], and in the worst case, it can reach 65.82 ADE in the FEND [15] scenario. Under our experimental setup, it can achieve more than 70 ADE. This is primarily due to insufficient training and catastrophic forgetting when training under different scenes, precisely the issue we aim to address. Of course, the leading advantage of C2INet on the SDD dataset is not clear enough and may not prominently demonstrate our method's advantages. 
However, it can be observed that many methods fail. To fully illustrate our approach's advantages, we have selected the real-world dataset Apollo dataset[8]. We will supplement the explanation of the results here once all the experimental data have been obtained.\"}", "{\"title\": \"Reply to all\", \"comment\": \"We appreciate the valuable comments from the area chairs and the reviewers. Overall, we have addressed and made adjustments according to the issues you have raised as follows:\\\\\\n-We have supplemented the experimental validation, including additional datasets and backbone.\\\\\\n-We have expanded on the details of the methods, including the analysis and mathematical derivation of causal inference and optimization objectives.\\\\\\n-We have added an analysis of computational and storage resources, demonstrating the feasibility of practical applications.\\\\\\nThe related modifications have been incorporated into the revised version and are highlighted in blue. Additionally, we have responded to all questions and welcome ongoing discussions with you.\"}", "{\"metareview\": \"This paper proposes a model for multiagent trajectory prediction. The key innovation centers around causal intervention for continuous trajectory prediction in dynamic environments.\\n\\nThe biggest criticism that is common across all the reviews is lack of clarity in exposition. Most of the reviewers ask for simplified presentation and crisp descriptions. Additionally there are questions about the experimental evaluation.\\n\\nMy recommendation is based upon the fact that we really need to see a whole new manuscript to see if it really addresses the presentation concerns.\", \"additional_comments_on_reviewer_discussion\": \"As mentioned above there are common themes across the reviewers.\"}", "{\"title\": \"Reply to Reviewer zjLH Part3\", \"comment\": \"Q6: Related works are not sufficient. 
More recent works from the past two years should be incorporated, such as diffusion models and works with new settings.\\\\\", \"a6\": \"Thank you for pointing out these related works. We have supplemented the revised version with descriptions of diffusion models [9][10][11] and works with new settings [12][13] in Appendix A.3.1. These excellent works have also inspired us greatly.\\n\\nReference\\\\\\n[1] Chen, Guangyi, et al. \\\"Human trajectory prediction via counterfactual analysis.\\\" ICCV. 2021.\\\\\\n[2] Liu, Yuejiang, et al. \\\"Towards robust and adaptive motion forecasting: A causal representation perspective.\\\" CVPR. 2022.\\\\\\n[3] Bagi, Shayan Shirahmad Gale, et al. \\\"Generative causal representation learning for out-of-distribution motion forecasting.\\\" ICML. PMLR, 2023.\\\\\\n[4] Mangalam, Karttikeya, et al. \\\"It is not the journey but the destination: Endpoint conditioned trajectory prediction.\\\" ECCV. 2020.\\\\\\n[5] Pourkeshavarz, Mozhgan, Junrui Zhang, and Amir Rasouli. \\\"CaDeT: a Causal Disentanglement Approach for Robust Trajectory Prediction in Autonomous Driving.\\\"CVPR. 2024.\\\\\\n[6] Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In The World Wide Web Conference, 2020.\\\\\\n[7] Yu, Cunjun, et al. \\\"Spatio-temporal graph transformer networks for pedestrian trajectory prediction.\\\" ECCV. 2020.\\\\\\n[8] Ma, Yuexin, et al. \\\"Trafficpredict: Trajectory prediction for heterogeneous traffic-agents.\\\" AAAI. Vol. 33. No. 01. 2019.\\\\\\n[9] Gu, Tianpei, et al. \\\"Stochastic trajectory prediction via motion indeterminacy diffusion.\\\" CVPR 2022.\\\\\\n[10] Li, Rongqing, et al. \\\"Bcdiff: Bidirectional consistent diffusion for instantaneous trajectory prediction.\\\" NeurIPS. 2023.\\\\\\n[11] Bae, Inhwan, Young-Jae Park, and Hae-Gon Jeon. \\\"SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model.\\\" CVPR. 2024.\\\\\\n[12] Li, Rongqing, et al. 
\\\"ITPNet: Towards Instantaneous Trajectory Prediction for Autonomous Driving.\\\" SIGKDD. 2024.\\\\\\n[13] Xu, Yi, and Yun Fu. \\\"Adapting to length shift: Flexilength network for trajectory prediction.\\\" CVPR. 2024.\\\\\\n[14] Xu, Chenxin, et al. \\\"Groupnet: Multiscale hypergraph neural networks for trajectory prediction with relational reasoning.\\\" CVPR. 2022.\\\\\\n[15] Wang, Yuning, et al. \\\"Fend: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction.\\\" CVPR. 2023.\"}", "{\"title\": \"Reply to Reviewer Nzn5 Part2\", \"comment\": \"Q3: As the number of tasks increases, will the storage requirements for the prior memory module become unmanageable? How do you manage priorities or compress the memory module effectively when storage space is limited? Have you considered the issue of memory overflow as the number of tasks continues to grow?\\\\\", \"a3\": \"Thank you for raising this important issue. Storage requirements are an essential consideration for edge devices; hence, in the article, we thoroughly considered how to maintain component expansion within a controllable range while preserving maximal diversity. For specifics, you can refer to Page 6, Line 316 of the initial version of the paper. Based on the set memory capacity, we prune components with similar information; the pruning strategy involves calculating the two most similar prior components by the similarity\\u00a0$S(\\\\cdot,\\\\cdot)$ and removing the one that contributes the least to the overall information content. The related process can also be found in Line 21 of Algorithm 1 in Appendix A.1: pruning is performed when the queue length exceeds $\\\\gamma$. We hope the above can address your concerns.\", \"q4\": \"C2INet involves several key hyperparameters (e.g., the KL divergence adjustment coefficient and weights in the memory module). Could you provide insights on how to tune these hyperparameters? 
Have you considered an adaptive hyperparameter optimization mechanism to reduce the reliance on manual tuning?\\\\\", \"a4\": \"Thank you for your question. It may be that our method description is not detailed enough. We have added the derivation process in the revised appendix to address this issue. Moreover, the training optimization process can be found in the algorithm description in Appendix A.1. The only parameter that requires manual adjustment throughout the training process is the capacity of the memory module, $\\\\gamma$, for which we have designed experiments and provided explanations in Appendix A4.4.\\\\\\nThe several vital hyperparameters you mentioned can be involved in the optimization process. Briefly, the overall training process is conducted through a min-max mechanism. In the minimization step, building on the convex property of Eq.5, we alternately optimize two variables: $P_{K}(C)$, representing the shift in the prior probability density function, and $\\\\alpha_k$, which adjusts the scaling factor. The first optimization aims to bring the prior closer to the aggregated posterior probability while maintaining as much inconsistency as possible with the content $M_{\\\\leq K-1}(C)$\\u00a0(Eq.6) already obtained in the queue. Once the prior $P_{K}(C)$ is determined and fixed, the weights of the components can be optimized according to application requirements either online (Line 290) or offline (Line 303) modes.\", \"q5\": \"The datasets used are mainly focused on pedestrian and vehicle trajectory prediction, which is somewhat limited in scope. Do you plan to test C2INet in more diverse and complex scenarios to evaluate its applicability and generalizability?\\\\\", \"a5\": \"Thank you for pointing out the issues above. We have supplemented with the real-world driving scenario dataset Apollo[1] for validation. 
The Apollo trajectory prediction dataset comprises 53 one-minute vehicle trajectory segments, each with a sampling rate of 2 frames per second. The categories of agents include small vehicles, big vehicles, pedestrians, motorcyclists, bicyclists, and others. Based on the number of agents present in each segment, the dataset can be divided into five scenarios:\\\\\\nScene 1 (Very few targets): [0, 40), 3 segments\\\\\\nScene 2 (Few targets): [40, 70), 15 segments\\\\\\nScene 3 (Moderate number of targets): [70, 100), 14 segments\\\\\\nScene 4 (Many targets): [100, 140), 13 segments\\\\\\nScene 5 (Very many targets): [140, +\\\\infty), 8 segments\\\\\\nThe experimental results are shown in the table below.\\n|Index | STGAT | STGCNN | GCRL-STGAT | GCRL-STGCNN |C$^{2}$Inet-STGAT|C$^{2}$Inet-STGCNN|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n|Scene 1| 5.961/10.094| 6.261/10.699 | **5.737/9.694** | **6.04/10.32** |6.136/10.319 | 6.088/10.240 |\\n|:Scene 2 | 6.018/10.193 | 6.430/10.854 | 5.247/9.09 | 5.001/8.43 |**2.840/4.822** |**2.928/4.47**|\\n|:Scene 3 | 3.478/5.517 | 7.332/12.749 | 2.408/4.116 | 3.949/6.951 | **2.223/3.790** |**2.181/3.780**|\\n|:Scene 4 | 2.823/4.572 | 6.470/11.053 | 2.607/4.292 | 3.327/5.672 | **2.445/4.082** |**2.201/3.812**|\\n|:Scene 5 | 2.906/4.881 | 4.277/7.486 | 2.673/4.613 | 3.007/5.075 | **2.590/4.415** |**2.116/3.592**|\\n\\nIt can be discerned that STGAT's representational capacity is superior to that of STGCNN. Particularly, STGCNN demonstrates inferior performance as the task duration increases. GCRL, with the introduction of causal intervention, can enhance the model's capabilities in various contexts, but it does not achieve optimal performance during prolonged training. 
C2INet effectively improves training stability, with the model's performance remaining stable from the second scenario onwards.\"}", "{\"summary\": \"This paper proposes $C^2INet$ for trajectory prediction, introducing causal intervention to enable continuous trajectory prediction across evolving domains. To address the issue of catastrophic forgetting, they introduce a Continuity Memory Module. Experiments on three datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The paper researches an interesting and critical problem in trajectory prediction.\\n2. The organization of the paper is somewhat reasonable.\", \"weaknesses\": \"Some format issue should be considered:\\n1. The citation format is wrongly used. \\n2. the vertical space is not proper set, especially on Page 4\\n3. Font size of Table 1 is too small, which is weird and does not match the main text\", \"some_weaknesses_about_the_contents\": \"1. I\\u2019m so confused about the motivation of the work, especially the necessity of introducing causal intervention into the trajectory prediction task.\\n\\n2. Existing continual learning methods also take into account the catastrophic forgetting problem by utilizing the experience replay mechanism. What are the differences between those CL methods and $C^2INet$? Can your method solve some critical problems that traditional CL methods cannot?\\n\\n3. The authors use STGAT and STGCNN as backbones. They were proposed in 2016 and 2019, which are now outdated. The method should be integrated with cutting-edge backbones proposed in 2023 and 2024.\\n\\n4. The results in terms of ADE/FDE on the SDD dataset are too high and seem strange. All results are above 50.0. Do you use a particular coordinate system instead of standard pixel coordinates?\\n\\n5. Related works are not sufficient. 
More recent works from the past two years should be incorporated, such as diffusion models [1][2][3] and works with new settings [4][5].\\n\\n[1]Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion\\n\\n[2]BCDiff: Bidirectional Consistent Diffusion for Instantaneous Trajectory Prediction\\n\\n[3]Universal Trajectory Predictor Using Diffusion Model\\n\\n[4]ITPNet: Towards Instantaneous Trajectory Prediction for Autonomous Driving\\n\\n[5]Adapting to Length Shift: FlexiLength Network for Trajectory Prediction\", \"questions\": \"I\\u2019m not familiar with causal intervention, so I may not provide professional reviews on the technical part.\\nQuestions for other parts, see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer zjLH Part1\", \"comment\": \"Thanks for your careful and valuable comments. We will explain the concerns point by point.\", \"q1\": \"Some format issue should be considered.\\\\\", \"a1\": \"Thank you for pointing this out. Due to page length constraints, some text formats need to be adjusted. We will certainly place greater emphasis on formatting norms. The revised version has corrected issues, including citation format and font size.\", \"q2\": \"I\\u2019m so confused about the motivation of the work, especially the necessity of introducing causal intervention into the trajectory prediction task.\\\\\", \"a2\": \"I am pleased to further elaborate on our motivation. The generalizability of trajectory prediction tasks is an issue that the community is increasingly focusing on, aiming to achieve a universal prediction model applicable in various practical environments. Causal intervention techniques have been widely used in recent years to eliminate the confounding bias in predictions, enabling trajectory prediction models to learn meaningful features that do not rely on spurious correlations. 
Some studies have already embarked on this exploration[1][2][3]. However, existing works generally lack consideration for ensuring the continued effectiveness of debiasing mechanisms as causal associations change with the environment. It is also validated in our experiments in section 5.3, indicating that with changes in the environment, the predictive outcomes in past scenarios may decline, and addressing catastrophic forgetting as well as environmental adaptability is a strong starting point for our work.\", \"q3\": \"Existing continual learning methods also take into account the catastrophic forgetting problem by utilizing the experience replay mechanism. What are the differences between those CL methods and C2INet? Can your method solve some critical problems that traditional CL methods cannot?\\\\\", \"a3\": \"As you mentioned, continual learning addresses the problem of catastrophic forgetting. Our proposed C2INet preserves the prior information $P(C)$ of continuously changing environments to maintain a long-term ability to eliminate confounding biases originating from the \\\"Prior-Aware\\\" nomenclature. We achieve the memorization of priors through posterior aggregation based on a carefully designed causal intervention structure, which can be referred to in Eq.6-8. The memory of environment-related correlation factors is achieved through optimizable pseudo feature $U$.\\\\\\nWe also compare our approach with typical continual learning methods (using STGAT as the backbone), including Elastic Weight Consolidation (EWC), which applies gradient constraints; latent features modeled as a mixture of Gaussians with diagonal covariance (MoG); and the random coresets method (Coresets), which uses memory replay during training. Our analysis demonstrates that our approach effectively captures high-quality changes in environmental content, as detailed in section 5.3. 
Traditional CL methods are indeed effective, but C2INet efficiently identifies the impact of intrinsic factors on trajectory representation. It cleverly integrates continual learning strategies to ensure memory and adaptability under changing training environments. C2INet shows improvements in motivation, technical design, and experimental outcomes over traditional CL methods.\", \"q4\": \"The authors use STGAT and STGCNN as backbones. They were proposed in 2016 and 2019, which are now outdated. The method should be integrated with cutting-edge backbones proposed in 2023 and 2024.\\\\\", \"a4\": \"Thank you for pointing this out. Classic never goes out of style. STGAT and STGCNN, as classic RNN-based and CNN-based trajectory representation methods, can simply and effectively demonstrate that our plugin structure can enhance their expressive performance. In fact, other works related to trajectory prediction generalization also adopt the same experimental design to eliminate additional interference, such as GCRL[3] using STGAT as the backbone, [2] using STGAT and PECNET[4], CaDeT[5] using Graph Transformer[6].\\\\\\nWe also acknowledge your concerns, and therefore, we have included STAR[7] as a backbone for validation on ETH-UCY Dataset, Synthetic Dataset and SDD Dataset (The detailed data results are presented in the table of Part 2) . The reason for not choosing the latest backbone is that the mechanisms in recent years' work are generally complex, making it puzzling to test our plug-and-play capabilities with a basic model. Therefore, we opted for the newer STAR model, which has advantages in handling relationships between agents by Spatio-Temporal Graph Transformer Networks.\\\\\\nIt can be observed that when using STAR as the backbone, the performance is inconsistent across various datasets, with a notably significant decline on the ETH-UCY dataset. 
The tested C$^2$INet (trained in online mode) indeed greatly enhances the model's training robustness in different scenarios. More importantly, under the setting of continual learning, the model can efficiently achieve effective representation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The study introduces C2INet, an innovative method for multi-agent trajectory prediction in intricate settings that utilizes ongoing causal intervention. C2INet integrates causal intervention and continuous learning inside a memory-enhanced framework to tackle challenges such as environmental bias, catastrophic forgetting, and hardware limitations in real-time multi-agent prediction. This method use variational inference to synchronize environment-related priors with a posterior estimator, guaranteeing precise trajectory representation by addressing confounding variables in the latent space.\\n\\nC2INet's principal innovation is its ongoing learning process, which progressively adapts to new circumstances while maintaining performance on previously encountered ones. This is accomplished via a memory module that retains ideal priors across situations, therefore safeguarding essential knowledge and reducing overfitting via a pruning process. Comprehensive assessments of real-world datasets, including ETH-UCY and Stanford Drone, alongside synthetic datasets, reveal that C2INet surpasses conventional methods in predictive accuracy and resistance to task interference, attaining notable enhancements in metrics such as ADE and FDE across multiple tasks.\\n\\nC2INet effectively mitigates significant shortcomings in current trajectory prediction models by integrating causal intervention with a continuous memory framework, hence guaranteeing strong performance in dynamic, multi-agent settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
Adaptability to Multiple Scenarios:\\nBy leveraging continual causal intervention, C2INet effectively handles confounding factors in diverse scenarios and retains information from previous tasks. This design enhances the model's adaptability to dynamic environments, making it suitable for multi-agent trajectory prediction in complex settings, such as autonomous driving and crowd monitoring.\\n\\n2. Comprehensive Experimental Validation:\\nThe paper provides extensive validation across multiple datasets (ETH-UCY, Stanford Drone, synthetic datasets) and compares C2INet with various baseline methods, including common causal intervention and continual learning approaches. The results demonstrate that C2INet outperforms traditional methods in key metrics (e.g., ADE and FDE), proving its effectiveness in handling catastrophic forgetting and improving prediction accuracy.\\n\\n3. Modular Design:\\nC2INet\\u2019s design is modular, making it compatible with multiple baseline models (e.g., STGAT, SocialSTGCNN). This plug-and-play characteristic increases the flexibility of the approach, allowing it to be used in various model architectures and promoting wider applicability.\", \"weaknesses\": \"1. Innovative Design:\\nThe paper introduces a novel model, C2INet, which combines causal intervention with continual learning, specifically using a memory module to retain optimal priors across different scenarios to mitigate catastrophic forgetting. This approach is relatively rare in multi-task learning and offers a degree of originality.\\n\\n2. Adaptability to Multiple Scenarios:\\nBy leveraging continual causal intervention, C2INet effectively handles confounding factors in diverse scenarios and retains information from previous tasks. This design enhances the model's adaptability to dynamic environments, making it suitable for multi-agent trajectory prediction in complex settings, such as autonomous driving and crowd monitoring.\\n\\n3. 
Potential Limitations of the Prior Memory Module:\\nAlthough the prior memory module helps alleviate catastrophic forgetting, it heavily relies on storing priors for different tasks, which may lead to challenges in memory management and capacity. As the number of tasks grows, the memory module might struggle to scale efficiently. Additionally, the paper does not discuss how to effectively manage priority or memory compression when storage space is limited.\\n\\n4. Limitations of the Experimental Datasets:\\nAlthough the paper uses multiple datasets, these are mainly focused on specific domains (e.g., pedestrian and vehicle trajectory prediction) and lack diversity. The generalizability of the experimental results to more complex or diverse dynamic environments is unclear, limiting the method\\u2019s applicability to real-world scenarios.\\n\\n5. Complex Hyperparameter Tuning with Limited Guidance:\\nC2INet involves multiple key hyperparameters (e.g., the KL divergence adjustment coefficient, weights in the memory module) that significantly affect model performance, but the paper does not provide detailed guidance on tuning them. The complexity of hyperparameter tuning, combined with a lack of explicit guidelines, may hinder other researchers from reproducing and applying the method in different settings.\", \"questions\": \"You have not provided detailed mathematical derivations for the causal intervention section. Could you elaborate further or reference additional causal inference theories to support the method? Specifically, how do you ensure that the intervention effectively removes confounding factors, and have you considered potential causal relationships between different tasks?\\n\\nWhat are the computational complexity and storage requirements of C2INet? How do you ensure real-time performance in hardware-constrained environments (e.g., embedded systems or mobile devices)? 
Is there a quantitative analysis of resource consumption, or have you tried to optimize the algorithm to reduce computational load?\\n\\nAs the number of tasks increases, will the storage requirements for the prior memory module become unmanageable? How do you manage priorities or compress the memory module effectively when storage space is limited? Have you considered the issue of memory overflow as the number of tasks continues to grow?\\n\\nC2INet involves several key hyperparameters (e.g., the KL divergence adjustment coefficient, weights in the memory module). Could you provide insights on how to tune these hyperparameters? Have you considered an adaptive hyperparameter optimization mechanism to reduce the reliance on manual tuning?\\n\\nThe datasets used are mainly focused on pedestrian and vehicle trajectory prediction, which is somewhat limited in scope. Do you plan to test C2INet in more diverse and complex scenarios to evaluate its applicability and generalizability?\\n\\nGiven that C2INet includes multiple complex modules (e.g., memory module and causal intervention), how interpretable is the model? Do you plan to provide more intuitive visualizations or explanations to demonstrate the model\\u2019s decision-making process?\\n\\nSince the paper has already discussed the situation of hardware resource constraints, has it considered other optimization methods such as hardware acceleration or knowledge distillation? What advantages does the proposed optimization method have compared to these alternative approaches?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5IkDAfabuo
Prioritized Generative Replay
[ "Renhao Wang", "Kevin Frans", "Pieter Abbeel", "Sergey Levine", "Alexei A Efros" ]
Sample-efficient online reinforcement learning often uses replay buffers to store experience for reuse when updating the value function. However, uniform replay is inefficient, since certain classes of transitions can be more relevant to learning. While prioritization of more useful samples is helpful, this strategy can also lead to overfitting, as useful samples are likely to be more rare. In this work, we instead propose a prioritized, parametric version of an agent's memory, using generative models to capture online experience. This paradigm enables (1) densification of past experience, with new generations that benefit from the generative model's generalization capacity and (2) guidance via a family of ``relevance functions'' that push these generations towards more useful parts of an agent's acquired history. We show this recipe can be instantiated using conditional diffusion models and simple relevance functions such as curiosity- or value-based metrics. Our approach consistently improves performance and sample efficiency in both state- and pixel-based domains. We expose the mechanisms underlying these gains, showing how guidance promotes diversity in our generated transitions and reduces overfitting. We also showcase how our approach can train policies with even higher update-to-data ratios than before, opening up avenues to better scale online RL agents.
[ "online learning", "model-based reinforcement learning", "generative modeling", "synthetic data", "continual learning" ]
Accept (Oral)
https://openreview.net/pdf?id=5IkDAfabuo
https://openreview.net/forum?id=5IkDAfabuo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x7yJBdtuyR", "vvXD1PxiuS", "vhb74otyaR", "tbCuYa8F2U", "nunkadufAS", "m39P5AwSxT", "lqSxLKWKee", "lb9QL0Kx6k", "knTLqMoytb", "iTXRe82jKN", "h0XUtHBAKU", "chgHFGitmw", "ZKMf2Q99if", "WqnW468nw4", "WHSrCaUIt7", "U7HDxNw0IK", "TbXbgbxE9q", "Lej7WxzrJV", "H19FIxmAiZ", "BB1f0dVd4V", "8YgCGZsZh5", "6TlKOTKvuP", "630CbVNihU", "5lL9BZSdnU", "4PKgx2c6cC", "1S19qp8Bea", "0WGaknUxQj" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1731999601342, 1732067731869, 1733169906981, 1732561449688, 1731998651206, 1732654647946, 1731999473108, 1732541515018, 1730689988449, 1730694412680, 1730845947261, 1732561342469, 1732561381686, 1732566214706, 1731998820070, 1731998996073, 1737523563664, 1732561419512, 1732067945332, 1730403208342, 1732043917164, 1731999339158, 1731997687741, 1731997578503, 1734739954744, 1733184865988, 1731999241345 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_ir9Q" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_8yrp" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_ir9Q" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_Yyu8" ], [ 
"ICLR.cc/2025/Conference/Submission3226/Reviewer_1k48" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_Yyu8" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_8yrp" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_1k48" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ], [ "ICLR.cc/2025/Conference/Submission3226/Area_Chair_s4Kk" ], [ "ICLR.cc/2025/Conference/Submission3226/Reviewer_1k48" ], [ "ICLR.cc/2025/Conference/Submission3226/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer 8yrp (2/2)\", \"comment\": \"**Q3:** \\u201cCould PGR be extended to offline RL settings? If so, what modifications would be necessary?\\u201d\\n\\n**A3:** **While parts of our method are definitely applicable to the offline setting (e.g. the idea of using conditional signals to guide generation towards pre-specified data characteristics), our problem setting is strictly in online RL.**\\n\\nWe believe this choice makes our contribution clearer \\u2014 namely, we show how widely-used prioritization mechanisms in online RL can be cast through the lens of conditional generation. In online RL, data is expensive to obtain (motivating generative densification) and relevant data is scarce (motivating generative guidance). Both these elements are directly addressed and integrated into the fabric of PGR.\\n\\nWe leave to future work to investigate how our insights and techniques can be further leveraged to the offline RL setting. 
One important modification is that with entire offline trajectories available, it may make sense to use relevance functions with an understanding of trajectory-level semantics, for improved guidance during densification.\\n\\n---\\n\\n**Q4:** \\u201cHow does PGR's performance compare against PER baselines which use approximate parametric models of prior experience?\\u201d\\n\\n**A4:** **PGR outperforms baselines which use parametric models of prior experience.**\\n\\nWe direct the reviewer to Figure 3a, and the accompanying discussion on lines 369-374 in Section 5.1. We compare PGR to two PER baselines: one which uses the learned Q-networks to estimate TD-error for prioritization, and the other which uses the learned ICM module to prioritize transitions based on intrinsic curiosity. This latter module has the exact same parameterization as the relevance function in the curiosity variant of PGR. As we observe in Figure 3a, PGR outperforms these baselines. For ease of quantitative comparison, we display these values in Table 9 of Appendix D, and reproduce the results here:\\n\\n| | Quadruped-Walk | Cheetah-Run | Reacher-Hard |\\n|-----------------|------------------------|------------------------|------------------------|\\n| REDQ | 496.75 $\\\\pm$ 151.00 | 606.86 $\\\\pm$ 99.77 | 733.54 $\\\\pm$ 79.66 |\\n| SynthER | 727.01 $\\\\pm$ 86.66 | 729.35 $\\\\pm$ 49.59 | 838.60 $\\\\pm$ 131.15 |\\n| PER (TD-Error) | 694.02 $\\\\pm$ 99.17 | 685.23 $\\\\pm$ 63.76 | 810.37 $\\\\pm$ 89.22 |\\n| PER (Curiosity) | 726.93 $\\\\pm$ 71.59 | 627.68 $\\\\pm$ 55.50 | 763.21 $\\\\pm$ 52.29 |\\n| PGR (Curiosity) | **927.98 $\\\\pm$ 25.18** | **817.36 $\\\\pm$ 35.93** | **915.21 $\\\\pm$ 48.24** |\\n\\n---\\n\\n**Q5:** \\u201cAre there any other relevance functions that have been tried out?\\u201d\\n\\n**A5:** In the main text, we present results using relevance functions defined via raw reward, value estimation, TD-error and intrinsic curiosity. 
Following the discussion period, we include additional results for conditioning on alternative intrinsic rewards such as RND [1] and pseudo-counts [2], as well as online episodic curiosity [3] for harder environments where stochasticity impedes naive prediction error-based curiosity. For a full discussion and accompanying results, we refer the reviewer to the above general response, as well as Tables 4 and 5 in Appendix A.\\n\\n---\\n\\n**Q6:** \\u201cMinor writing issues with some missing words\\u201d\\n\\n**A6:** We have made minor edits throughout to improve the readability of the paper. Thank you for the detailed read!\\n\\n\\n[1] Burda et al. \\u201cExploration by Random Network Distillation.\\u201d arXiv 2018.\\n\\n[2] Bellemare et al. \\u201cUnifying Count-Based Exploration and Intrinsic Motivation.\\u201d NeurIPS 2016.\\n\\n[3] Savinov et al. \\u201cEpisodic Curiosity Through Reachability.\\u201d arXiv 2018.\\n\\n[4] Houthooft et al. \\u201cVIME: Variational Information Maximizing Exploration\\u201d NeurIPS 2016.\"}", "{\"title\": \"Continuing Discussion with Reviewer 1k48 (1/2)\", \"comment\": \"We thank the reviewer for their timely engagement during the discussion period and efforts to improve our work. We have now directly linked our additional results in the appendix to our experimental section (Section 5.1). We continue the interesting discussion on the reviewer's points below:\\n\\n## Follow-Up to Noisy-TVs and Relevance Functions\\n\\nIndeed, we acknowledge that Table 5 demonstrates that different relevance functions can lead to different outcomes. Choosing the right relevance function for a particular task or environment is thus an important framework-level hyperparameter. \\n\\nOur perspective remains that this choice is relatively simple and less heuristic than many other choices commonly made in RL frameworks. We believe the conclusions from Table 5 of Appendix A should be that:\\n\\n1. 
ICM is a good default choice for $\\mathcal{F}$ (PGR (Curiosity) with no hyperparameter tuning outperforms the PPO baseline)\n\n2. ICM failing is a fundamental problem in exploration, endemic to many prediction-error based metrics. This problem is reasonably easy to visually diagnose (inspection of behavior learned by the PPO + Curiosity agent, as documented by Savinov et al. [3]) and quantitatively diagnose (performance of PGR (RND) also suffers).\n\n3. Choosing other more robust relevance functions such as ECO restores the advantages of PGR, allowing it to outperform the baseline that directly adds an ECO exploration bonus to model-free PPO. \n\nFinally, we buttress our claim that $\\mathcal{F}$ can be chosen to inform us a priori of transition tuples that are physically more/less desirable, and promote/penalize generating such transitions through appropriate conditioning. We point out that non-linear MPC-based controllers offer exactly the physically-grounded relevance functions we describe. Concretely, consider the task of quadruped locomotion. Then, the cost functions described in the optimal control problems formulated by Equations 9 or 10 of Corb\u00e8res et al. [10] offer a concrete way to \"score\" motion trajectories. Specifically, during online learning of a neural network-based controller, given a particular state (e.g. proprioceptive measurements such as center-of-mass linear/angular velocity/acceleration) and control parameter (e.g. ground reaction forces), we can obtain a \"cost\" penalization term via a relevance function adopting the form of Equation 9 or 10 of Corb\u00e8res et al. [10]. This is exactly the scalar conditioning term our diffusion model will accept. We can use a similar prompting strategy as described in Section 4.3 of our main text to densify the trajectories which exhibit the lowest $p\\%$ of \"cost,\" for some hyperparameter $p$. This line of reasoning extends to other domains (e.g. 
grasping/manipulation), where constraints on forces/dynamics are well understood from a model predictive control perspective (e.g. force closures and grasping kinematics in [11]).\\n\\nThus, PGR can also condition on relevance functions outside traditional measures of priority in PER literature or exploration bonuses in exploration literature. Again, in the interest of time, we cannot replicate the experimental setups of either [10] or [11], and so ascertaining the utility of PGR under these specific settings is not done. However, we believe it is clear that such formulations are an interesting nascent area of future work that PGR sets the foundation for.\\n\\n\\n[10] Corb\\u00e8res et al. Comparison of predictive controllers for locomotion and balance recovery of quadruped robots. ICRA 2021.\\n\\n[11] Gold et al. Model predictive interaction control for force closure grasping. CDC 2021.\"}", "{\"comment\": \"Dear reviewer 1k48,\\n\\nWe would like to provide a gentle reminder that today is the last day for author-reviewer discussions. If there are any remaining concerns, we would be grateful for an opportunity to address them. We would also appreciate if you could double check that your final score reflects your opinion after engaging with us during this discussion period.\\n\\nThank you again for your service.\"}", "{\"title\": \"Following Up on Discussion Period\", \"comment\": \"Dear reviewer 8yrp,\\n\\nThank you for the constructive feedback and suggestions. We greatly appreciate you updating your score. If there are any additional clarifications we can provide, please let us know. 
Thank you once again.\"}", "{\"title\": \"Response to reviewer 1k48 (1/2)\", \"comment\": \"Thank you very much for your review.\\n\\n**Q1:** How is this method novel with respect to prior work that uses intrinsic rewards on rollouts from a learned dynamics model?\\n\\n**A1:** **The PGR paradigm enables 1) generative _densification_ of online experience and 2) conditional _guidance_ via relevance functions. We show that each of these benefits uniquely positions our work amongst prior work.**\\n\\nFirstly, we note that densification is different from planning through the generative model entirely, as in e.g Dreamer [2][4]. In particular, Dreamer analytically backprops the gradients of dynamics through entire trajectories \\u201cgenerated\\u201d in latent space. This means that its generative world model is the bottleneck for learning. We show that in the presence of noisy/imperfect observations, approaches akin to this are less desirable than PGR.\\n\\nSpecifically, we perform an experiment comparing MAX[1] and Dreamer-v3[4] to PGR when the learned dynamics are misled by noisy observations. We inject Gaussian noise into some fraction of the observations in transitions fed to the world model in Dreamer-v3, as well as the observations in transitions fed to the exploration model ensemble in MAX. We similarly pollute the observations in transitions to the ICM module in PGR. For more details, we refer the reviewer to Appendix C. 
We reproduce results here for ease of viewing:\\n\\n| | | Cheetah-Run | Walker-Walk | Hopper-Hop |\\n|:----------:|-----------------|------------------------|------------------------|-----------------------|\\n| &#8593; | MAX | 644.79 $\\\\pm$ 63.90 | 509.63 $\\\\pm$ 48.76 | 48.30 $\\\\pm$ 16.56 |\\n| Original | Dreamer-v3 | 362.01 $\\\\pm$ 30.69 | 627.79 $\\\\pm$ 41.53 | 86.31 $\\\\pm$ 21.21 |\\n| &#8595; | PGR (Curiosity) | **817.36 $\\\\pm$ 35.93** | **865.33 $\\\\pm$ 75.39** | **94.45 $\\\\pm$ 12.07** |\\n| &#8593; | MAX | 363.26 $\\\\pm$ 58.86 | 421.06 $\\\\pm$ 33.77 | 17.07 $\\\\pm$ 9.82 |\\n| Noised | Dreamer-v3 | 199.74 $\\\\pm$ 34.60 | 386.13 $\\\\pm$ 59.71 | 35.42 $\\\\pm$ 18.03 |\\n| &#8595; | PGR (Curiosity) | **697.67 $\\\\pm$ 29.94** | **734.02 $\\\\pm$ 48.57** | **64.45 $\\\\pm$ 10.86** |\\n\\nAs we observe, PGR continues to outperform MAX and Dreamer-v3 under these noisier conditions. This is because Dreamer must learn the joint distribution over $p(s, a, s\\u2019, r)$ entirely, and thus any partial model errors propagate throughout generations, and also affect policy learning. In general, (with or without intrinsic reward,) planning _directly through_ a world model with inaccurate dynamics will naturally lead to poor performance. Contrast this with PGR, which more loosely couples the learning of environment dynamics (via intrinsic curiosity) from the rest of the generative world model (diffusion over $(s, a, s\\u2019, r)$ tuples). Concretely, PGR learns the dynamics $p(s\\u2019 | s, a)$, and conditions on dynamics error $e$ to generate $(s, a, s\\u2019, r | e)$. This means that even with an imperfect dynamics model, we may obtain a poor signal through $e$, but the generated transitions $(s, a, s\\u2019, r)$ still remain faithful to the ground truth environment dynamics! 
Furthermore, densification means PGR also grounds synthetic data with real data, allowing the policy to be more robust to any erroneous synthetic generations.\n\nSecondly, we are not concerned with intrinsic rewards for better exploration as in [1], which is often the use case for combining intrinsic rewards and model-based RL algorithms [1][5][6]. Using intrinsic reward to guide exploration is fundamentally different from using relevance functions to guide generation. In fact, PGR is not attached to using intrinsic rewards at all. Our paradigm centers on _guidance_ for generative replay, meaning we can condition on **any** relevance function with desired properties. For example, we may choose $\\mathcal{F}$ to explicitly avoid undesirable degenerate solutions. Concretely, we may have physics-based models which inform us a priori which $(s, a, s\u2019, r)$ transition tuples are more likely to lead to unstable gaits for locomotion tasks, and penalize generating such transitions through an appropriate negative condition. Or, we might want to avoid the boundaries of a workspace and therefore decrease generations of $(s, a, s\u2019, r)$ tuples so that $(s + a)$ is outside some epsilon bubble of the edges of the workspace.\n\nDue to time constraints, we do not provide empirical verification of these ideas. However, we believe that by showing an instantiation of PGR with ICM, and empirically grounding why and how it is superior to other simpler choices, our work opens up these promising new avenues for future research in online RL.\n\nIn the meantime, we have softened the language in the paper which might erroneously provide the impression that ICM is the ideal, optimal or only choice for relevance functions.\"}", "{\"comment\": \"I would like to thank the authors for putting in all the work in this rebuttal and for clarifying my questions as well as the questions of the other reviewers. 
Their efforts have been helpful in improving my understanding of their work and I am updating my score to reflect this.\"}", "{\"title\": \"Response to reviewer 8yrp (1/2)\", \"comment\": \"Thank you very much for your review.\\n\\n**Q1:** \\u201cHow robust is PGR to errors in the learned dynamics model? Are there ways to mitigate the impact of inaccurate dynamics predictions on the curiosity-based relevance function?\\u201d\\n\\n**A1:** **Like most works in model-based RL, PGR is indeed susceptible to errors in the learned dynamics model of the curiosity-based relevance function. However, in practice, we show that PGR is far more robust than other approaches, especially those which directly plan through the dynamics model.**\\n\\nSpecifically, we perform an experiment comparing MAX[1] and Dreamer-v3[4] to PGR when the learned dynamics are misled by noisy observations. We inject Gaussian noise into some fraction of the observations in transitions fed to the world model in Dreamer-v3, as well as the observations in transitions fed to the exploration model ensemble in MAX. We similarly pollute the observations in transitions to the ICM module in PGR. For more details, we refer the reviewer to Appendix C. 
We reproduce results here for ease of viewing:\\n\\n| | | Cheetah-Run | Walker-Walk | Hopper-Hop |\\n|:----------:|-----------------|------------------------|------------------------|-----------------------|\\n| &#8593; | MAX | 644.79 $\\\\pm$ 63.90 | 509.63 $\\\\pm$ 48.76 | 48.30 $\\\\pm$ 16.56 |\\n| Original | Dreamer-v3 | 362.01 $\\\\pm$ 30.69 | 627.79 $\\\\pm$ 41.53 | 86.31 $\\\\pm$ 21.21 |\\n| &#8595; | PGR (Curiosity) | **817.36 $\\\\pm$ 35.93** | **865.33 $\\\\pm$ 75.39** | **94.45 $\\\\pm$ 12.07** |\\n| &#8593; | MAX | 363.26 $\\\\pm$ 58.86 | 421.06 $\\\\pm$ 33.77 | 17.07 $\\\\pm$ 9.82 |\\n| Noised | Dreamer-v3 | 199.74 $\\\\pm$ 34.60 | 386.13 $\\\\pm$ 59.71 | 35.42 $\\\\pm$ 18.03 |\\n| &#8595; | PGR (Curiosity) | **697.67 $\\\\pm$ 29.94** | **734.02 $\\\\pm$ 48.57** | **64.45 $\\\\pm$ 10.86** |\\n\\nAs we observe, PGR continues to outperform MAX and Dreamer-v3 under these noisier conditions. This is because Dreamer must learn the joint distribution over $p(s, a, s\\u2019, r)$ entirely, and thus any partial model errors propagate throughout generations, and also affect policy learning. In general, (with or without intrinsic reward,) planning _directly through_ a world model with inaccurate dynamics will naturally lead to poor performance. Contrast this with PGR, which more loosely couples the learning of environment dynamics (via intrinsic curiosity) from the rest of the generative world model (diffusion over $(s, a, s\\u2019, r)$ tuples). Concretely, PGR learns the dynamics $p(s\\u2019 | s, a)$, and conditions on dynamics error $e$ to generate $(s, a, s\\u2019, r | e)$. This means that even with an imperfect dynamics model, we may obtain a poor signal through $e$, but the generated transitions $(s, a, s\\u2019, r)$ still remain faithful to the ground truth environment dynamics! 
Furthermore, densification means PGR also grounds synthetic data with real data, allowing the policy to be more robust to any erroneous synthetic generations.\\n\\nMore generally, one way to mitigate the impact of inaccurate dynamics predictions for PGR is to leverage a more robust relevance function $\\\\mathcal{F}$. For example, one could use a variational approach to intrinsic rewards such as VIME [4], with better calibrated uncertainty, to either a) directly do conditional generation of more robust transitions, or b) to downweigh generated samples when the dynamics are highly uncertain.\\n\\n---\\n\\n**Q2:** \\u201cPGR scales better at r=0.75 than SynthER but neither benefits from 0.875. We would think the trend would be consistent? What is the intuition behind this?\\u201d\\n\\n**A2:** **There is nothing better than ground truth data directly obtained in the \\u201creal world\\u201d. While generative replay is a useful mechanism to densify past real-world online experience, further work is needed to completely replace real environment interactions with synthetic ones.**\\n\\nWe show in Figure 5 of the main text that our diffusion model is a reasonable approximation to the simulator dynamics, but this MSE is still non-zero. When the synthetic data ratio is too high, it is plausible that the model begins fitting to the biased distribution of the diffusion model. Thus, overly relying on synthetic data may result in policy collapse.\\n\\nNote that we did not tune the hyperparameter controlling the frequency of the inner loop when varying the synthetic data ratio $r$. 
We hypothesize that 10K iterations per inner loop might no longer be optimal for higher synthetic data ratios, since more regularly grounding the diffusion model in the real simulator dynamics becomes more critical if we rely more heavily on the generated data for policy learning.\"}", "{\"comment\": \"Thank you for providing detailed answers to my questions and for adding in some extra details, such as the additional relevance functions.\nThis gives me much more clarity about the work, and I believe this would be a good addition to the community. I have raised my score to reflect my new opinion.\"}", "{\"summary\": \"The paper proposes to use conditional diffusion models to improve the experience replay for an RL learning agent. The proposed method improves performance by improving the diversity of the samples in the experience replay buffer and reducing overfitting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written and provides a clear explanation of the method.\n2. The research problem addressed in the paper is well laid out and is an important one for improving the performance of RL methods.\", \"weaknesses\": \"1. While the method shows improved performance, it is a bit simple, as it combines existing elements in diffusion models and RL to propose the solution.\n2. It would be useful to compare the effect of different kinds of exploration bonuses.\", \"questions\": \"1. Is the method compatible with different kinds of exploration bonuses? If so, how do you think they would compare?\n2. How do you think the method would do when simply having diverse samples does not imply usefulness? An example is the noisy-TV problem.\n3. How sensitive is the algorithm to the frequency of the inner loop in Algorithm 1?\n4. 
Can multiple relevance functions be combined?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a conditional diffusion model as a synthetic replay buffer to combat early overfitting in online reinforcement learning and to diversify experiences for the learning model. This is achieved with a relevance function that selects samples that are rare but important for learning based on the temporal difference error, the value of a learned Q-function and a modified intrinsic curiosity module.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"One of the strengths of this paper is its clear and concise language, as well as the well-structured presentation of the proposed method.\nIt is quite logical to improve on the already existing prioritized experience replay method and implement it in the generative domain. The method is explained well and should be quite easily reproducible.\nOverall, the research could be a valuable contribution to the reinforcement learning community.\", \"weaknesses\": \"A topic I feel was somewhat missed is the different ways to approach generative replay, such as mentions of other generative models (e.g. 
variational autoencoders, Gaussian mixture models) and why they were not used.\nOne thing I found rather off-putting, and this is very nitpicky, is that Tables 1, 2 and 3 are a bit crammed and slightly offset from each other.\", \"questions\": \"What exploration method does the agent use?\nCould the exploration method be improved instead of the sample generation to improve the diversity of samples?\nWould a combination of both better exploration and this method be the optimal and a possible solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work proposes a form of sample-based experience replay that leverages a generative model to provide and augment samples drawn from the replay buffer. To avoid overfitting, a set of guidance functions is used to steer the generative process toward diverse and useful samples. The generative replay mechanism is a diffusion model that is conditioned on some auxiliary information. The authors propose a few different versions of this conditioning, such as intrinsic curiosity, TD error, or Q values. The idea is that using these scores, the generative model can be steered towards generating high-quality samples. Given such a replay mechanism, this work evaluates model-free and model-based RL agents trained via this generative replay on Gym and DMC. The results show improvement on both pixel-based and state-based tasks. There are also ablations with larger policy networks and higher generative data ratios, which show further improvements.\n\n-------------------------------------------------\nI thank the authors for a substantive rebuttal that addressed my and (as far as I can tell) other concerns. 
I therefore raise my score to an 8.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work proposes a scalable method for training model-free or model-based agents in a variety of domains. I believe the formulation is simple enough to be integrated into and improve other approaches.\", \"I also found the presentation clear and easy to read.\", \"I found the scaling experiments to be very compelling. I'm a little concerned about the general thrust of driving up the syn-real data ratio as high as possible, since we do need to ground the generations in real experience. But I still think the insights here are valuable.\"], \"weaknesses\": \"I have two points of contention with this work.\n1. From a paradigm perspective, I don't understand how this is different from prior work in model-based RL that applies intrinsic rewards to a learned dynamics model [1] or world model [2]. These methods also utilize a generative model as a copy of the environment, then train the agent in simulation to acquire interesting data (under the intrinsic reward). It seems that this method does the same, except that instances, rather than full trajectories, are generated. I do see how this is different from just applying an intrinsic bonus during training, since here the synthetic data has a chance to be more diverse. \n\n2. I thank the authors for providing numerous experiments, but I am not at all convinced that this method is robust to the choice of guiding function F. ICM is known to be susceptible to the noisy-TV problem, where difficult-to-model environmental factors score arbitrarily high under ICM. The chosen tasks are too simple perceptually to see this problem. This in and of itself is not a problem, but it means that we need to search for another F that works for our task, which is hard in practice. In the meantime, there are other intrinsic rewards that do not suffer from this pathology [3]. 
\\n\\n\\n\\n[1] Shyam, Pranav, Wojciech Ja\\u015bkowski, and Faustino Gomez. \\\"Model-based active exploration.\\\" International conference on machine learning. PMLR, 2019.\\n\\n[2] Hafner, Danijar, et al. \\\"Dream to control: Learning behaviors by latent imagination.\\\" arXiv preprint arXiv:1912.01603 (2019).\\n\\n[3] Savinov, Nikolay, et al. \\\"Episodic curiosity through reachability.\\\" arXiv preprint arXiv:1810.02274 (2018).\", \"questions\": \"I'll rephrase my above concerns as questions.\\n\\n1. How is this method novel with respect to prior work that uses intrinsic rewards on rollouts from a learned dynamics model? It seems like a very similar approach to acquiring data that scores well under a given guidance function F, where F can be ICM or another intrinsic reward. \\n\\n2. How does this method handle noisy-tvs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Following up on Discussion Period\", \"comment\": \"Dear reviewer 1k48,\\n\\nThank you for the constructive feedback and suggestions. We appreciate your engagement in the discussion period; if you feel that we have sufficiently addressed your questions and alleviated your concerns, we would also appreciate if you would consider updating your score.\\n\\nPlease let us know if there are other additional clarifications we can provide. Thank you once again.\"}", "{\"title\": \"Following Up on Discussion Period\", \"comment\": \"Dear reviewer Yyu8,\\n\\nThank you for the constructive feedback and suggestions. As the discussion period is coming to a close, we would appreciate you kindly reading our response. If we have addressed your questions and alleviated your concerns, we would also appreciate if you would consider updating your score. \\n\\nPlease let us know if there are other additional clarifications we can provide. 
Thank you once again.\"}", "{\"comment\": \"Dear Authors,\nSorry for the late response to your rebuttal. Thank you for your detailed answers to the questions I had. Your thorough rebuttal has alleviated all my concerns regarding your paper. I find it particularly interesting how VAEs and GANs performed within your context, or rather didn't perform all too well. I personally think that better exploration strategies can be interesting and necessary future research to improve not only your approach but others as well.\"}", "{\"title\": \"Response to reviewer 1k48 (2/2)\", \"comment\": \"**Q2:** \u201cHow does this method handle the noisy-tv [problem]?\u201d\n\n**A2:** **We also show in our general response that PGR can effectively condition on relevance functions based on RND [7], pseudo-counts [8] or online episodic curiosity (ECO) [3], all of which have been demonstrated to reliably evade stochastic noise sources in prior work.**\n\nThe relevant discussion can be found under the first question in the above general response, with experimental details in Appendix A in the updated draft. To summarize, PGR using RND or ECO as relevance functions continues to enjoy superior performance over baselines that do not perform generative replay. \n\nOur research contribution is _not_ the choice of relevance function. We merely use prediction error-based curiosity to illustrate a strong candidate with more desirable properties than previous obvious choices such as reward, value or TD-error, and provide an empirical explanation for why. That is, our core contribution centers on the value of this generative replay paradigm as a whole, particularly in online settings. Under this problem setting, data is expensive to obtain (motivating generative densification) and relevant data is scarce (motivating generative guidance). 
These are two issues fundamentally addressed by the PGR paradigm.\\n\\n**Finally, as the reviewer identifies, all this implies that we may need to search for an effective relevance function appropriate for our task and environment. However, we argue that this is not in practice an issue.**\\n\\nFirstly, we have shown that curiosity as defined via the ICM prediction error is a broadly effective default choice for PGR (in most standard online RL environments accepted by the wider community.) Even when ICM fails, as Savinov et al. [3] in Section 4.2 point out, \\u201cvisual inspection of the ICM learnt behaviour\\u201d makes it obvious to diagnose the noisy-TV problem, in which case we can opt for more robust relevance functions such as ECO. That is, we view the choice of relevance function as a hyperparameter, with curiosity \\u00e0 la ICM as a strong default. This is no different from the minimum reachable distance, task/bonus weights or episodic memory buffer hyperparameters described in Section 2.2 and 2.3 of Savinov et al. [3], which are requisite for the strong performance of ECO.\\n\\nSecondly, we feel that the majority of prior work investigating the noisy-TV problem with empirical results do so in artificial settings which do not reflect most settings where randomness is more structured. Concretely, Savinov et al. [3], Raileanu & Rockt\\u00e4schel [9] and Burda et al. [7] (who first coined the noisy-TV term), all characterize this problem in a grid-like maze navigation setting, where often a literal noisy TV screen is playing random noise/images. In our particular online setting, that of continuous control with underlying physics-driven structure, this problem is very far removed. In general, we believe that in _most_ practical online settings, including the real world, stimuli may initially appear random to the learning agent, but such stochasticity is generally structured and exploitable for the vast majority of interesting tasks.\\n\\n[1] Shyam et al. 
\\u201cModel-based Active Exploration.\\u201d ICML 2019.\\n\\n[2] Hafner et al. \\u201cDream to Control: Learning Behaviors by Latent Imagination.\\u201d arXiv 2019.\\n\\n[3] Savinov et al. \\u201cEpisodic Curiosity Through Reachability.\\u201d arXiv 2018.\\n\\n[4] Hafner et al. \\u201cMastering Diverse Domains through World Models.\\u201d arXiv 2023.\\n\\n[5] Mendoca et al. \\u201cDiscovering and Achieving Goals via World Models.\\u201d NeurIPS 2021.\\n\\n[6] Sekar et al. \\u201cPlanning to Explore via Self-Supervised World Models.\\u201d ICML 2020.\\n\\n[7] Burda et al. \\u201cExploration by Random Network Distillation.\\u201d arXiv 2018.\\n\\n[8] Osband et al. \\u201cDeep Exploration via Bootstrapped DQN.\\u201d NeurIPS 2016.\\n\\n[9] Raileanu & Rockt\\u00e4schel. \\u201cRIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments.\\u201d ICLR 2020.\"}", "{\"title\": \"Response to reviewer Yyu8\", \"comment\": \"Thank you very much for your review.\\n\\n**Q1:** \\u201cDiscussing different ways to approach generative replay such as mentions of other generative models (e.g. variational auto encoders, gaussian mixture models) and why they were not used.\\u201d\\n\\n**A1:** **We believe that VAEs and GANs have issues which make comparison to baselines harder and would only obfuscate our contribution.** In particular, GANs are sensitive to hyperparameters and much harder to train [1]. VAEs do not have the generation quality that is required to achieve comparable results to methods like SynthER. A baseline level of generation quality is especially important in the online setting, when initial data is noisy.\\n\\nTo verify this, we replace the underlying diffusion model in SynthER and PGR with an unconditional and conditional VAE, respectively. We design our VAE to approximate the capacity of the diffusion model in both SynthER and PGR (~6.8 million parameters). 
Specifically, we use a ResNet-style encoder and decoder with 4 and 8 layers, respectively. Each residual layer has bottleneck dimension 128, with a latent dimension of 32, resulting in 2.6 million encoder and 5.3 million decoder parameters. Note that this model architecture is also highly similar to the residual MLP denoiser used in the diffusion models of SynthER and PGR.\\nResults on three tasks (state-based DMC-100k) are shown in Appendix D, with results reproduced below for ease of viewing: \\n\\n| | Quadruped-Walk | Cheetah-Run | Reacher-Hard |\\n|-------------------|------------------------|------------------------|------------------------|\\n| REDQ | 496.75 $\\\\pm$ 151.00 | 606.86 $\\\\pm$ 99.77 | 733.54 $\\\\pm$ 79.66 |\\n| SynthER | 727.01 $\\\\pm$ 86.66 | 729.35 $\\\\pm$ 49.59 | 838.60 $\\\\pm$ 131.15 |\\n| Unconditional VAE | 384.38 $\\\\pm$ 154.30 | 549.36 $\\\\pm$ 190.79 | 700.65 $\\\\pm$ 161.18 |\\n| Conditional VAE | 501.99 $\\\\pm$ 79.88 | 668.49 $\\\\pm$ 76.81 | 792.85 $\\\\pm$ 93.73 |\\n| PGR (Curiosity) | **927.98 $\\\\pm$ 25.18** | **817.36 $\\\\pm$ 35.93** | **915.21 $\\\\pm$ 48.24** |\\n\\nWe conclude that a) indeed conditional generation continues to outperform unconditional generation, but b) overall performance is much lower than either SynthER or PGR, arguably performing within variance of the REDQ baseline. We believe this performance gap well justifies the focused use of diffusion models.\\n\\n---\\n\\n**Q2:** \\u201cFormatting of Tables 1, 2, and 3.\\u201d\\n\\n**A2:** We have reformatted the paper to better accommodate Tables 1, 2 and 3. Thank you for the detailed comment!\\n\\n---\\n\\n**Q3:** \\u201cWhat exploration method does the agent use? Could the exploration method be improved instead of the sample generation to improve diversity of samples? Would a combination of both a better exploration and this method be the optimal and a possible solution?\\u201d\\n\\n**A3:** **PGR does not rely on any particular exploration method. 
Indeed, better exploration can be orthogonally combined with PGR as a plausible approach.**\\n\\nNote that while better exploration alone may improve the diversity of samples, we show in Appendix B that the densification offered via generative replay is independently and critically important to PGR\\u2019s improved policy learning results.\\n\\nAs we illustrate in our general response above, PGR offers distinct advantages over explicit exploration bonuses, and potentially can be combined with implicit exploration bonuses. While we do not look at exploration in this work, we believe that these initial findings have now opened up an interesting intersection between generative replay and exploration for future works in online RL.\\n\\n\\n[1] Salimans et al. \\u201cImproved Techniques for Training GANs.\\u201d arXiv 2016.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Following Up on Discussion Period\", \"comment\": \"Dear reviewer ir9Q,\\n\\nThank you for the constructive feedback and suggestions. As the discussion period is coming to a close, we would appreciate you kindly reading our response. If we have addressed your questions and alleviated your concerns, we would also appreciate if you would consider updating your score. \\n\\nPlease let us know if there are other additional clarifications we can provide. Thank you once again.\"}", "{\"title\": \"Continuing Discussion with Reviewer 1k48 (2/2)\", \"comment\": \"## Follow-Up to Comparison to Model-Based Methods\\n\\nWe better clarify the difference between \\\" 'densification of the existing data distribution' and 'improving the data distribution' via PGR and Exploration, respectively.\\\"\\n\\nWe would like to highlight the results in Table 5 of Appendix A and Table 6 of Appendix B, which demonstrate that adding direct exploration bonuses of various kinds, across different tasks and environments, underperforms PGR. 
Moreover, we believe Table 7 of Appendix B makes a preliminary, albeit tenuous, argument that exploration may be entirely complementary to PGR in certain other tasks and environments. Indeed, these results indicate that there may also be a difference, if only at an empirical level, between PGR and exploration.\\n\\nOn a more abstract or ideological level, we strongly believe these two approaches are conceptually different because:\\n\\n1. Densification can benefit from the generalization capabilities of the generative (diffusion) model, interpolating transitions to hitherto unseen regions of the MDP. On the other hand, exploration-centric approaches require us to explicitly access those transitions during interaction rollouts and enter them into our replay buffer before we can train on them.\\n\\n2. Densification is directly concerned with improving the learning dynamics of the policy (e.g. via reducing overfitting of the Q functions), whereas exploration is predominantly concerned with completing the data distribution to e.g. better access sparse rewards or identify transitions which can possibly lead to more rewarding behaviors.\\n\\n3. PGR can choose any relevance function $\\\\mathcal{F}$ for conditioning, for different desired end goals (we concretize this original claim in the future experiment spaces proposed above.) Exploration-based functions are merely one such possible set of instantiations for $\\\\mathcal{F}$.\"}", "{\"summary\": \"This paper introduces a framework called Prioritized Generative Replay (PGR), a novel approach to enhance sample efficiency in online reinforcement learning (RL). Traditionally, replay buffers store experienced transitions and replay them uniformly or with prioritization based on metrics like TD-error. However, the authors point out that uniform replay can be inefficient, and prioritization can lead to overfitting. 
PGR addresses these issues by using a conditional generative model to create a parametric replay buffer.\", \"the_paper_claims_that_this_allows_for_two_key_advantages\": \"1) Densification: The generative model can create new, plausible transitions beyond those directly experienced, enriching the training data, especially in sparsely explored regions.\\n2) Guidance: By conditioning the generative model on \\\"relevance functions,\\\" the generated transitions can be steered towards areas more critical for learning, such as states with high novelty or uncertainty.\\n\\nThe authors also explore various relevance functions, including return, TD-error, and curiosity. They find that curiosity, based on the prediction error of a learned dynamics model, performs best. This is attributed to its ability to promote diversity in the generated transitions, thus reducing overfitting. They also show that their approach consistently improves performance and sample efficiency in both state- and pixel based domains.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) PGR offers a fresh perspective on replay buffers by combining generative modeling with guided replay. Framing this problem as a conditional generation problem with diffusion models is novel.\\n2) Diffusion model typically uses one single set of HPs requires no additional tuning I'd assume. This works well for PGR\\n3) Empirical results on various benchmarks demonstrate that PGR consistently outperforms existing model-free and model-based RL algorithms, as well as a generative replay baseline without guidance. Also has been shown to work in both state-based and pixel-based environments. \\n4) PGR is shown to scale well with larger policy networks and higher synthetic-to-real data ratios (important ablation that I wanted to see), potentially enabling more data-efficient training of large-scale RL agents. 
Really important result for scaling to many real use cases.\\n5) The authors also provide insights into why PGR works, particularly highlighting the role of curiosity in promoting diversity and reducing overfitting.\", \"weaknesses\": \"1) The curiosity-based relevance function relies on a learned dynamics model, which might be challenging to train accurately in complex environments.\\n2) Increasing Synthetic Data ratio does not benefit PGR and the unconditional baseline (SynthER) equally. PGR scales better at r=0.75 than SYNTHER but neither benefits from 0.875. We would think the trend would be consistent? whats the intution behind this? Also this figure 7 could be improved with the variation in r being shown\\n3) (Minor) writing issues throughout the paper with some missing words etc. Please re-read the paper and make the necessary changes.\", \"questions\": \"1) How robust is PGR to errors in the learned dynamics model? Are there ways to mitigate the impact of inaccurate dynamics predictions on the curiosity-based relevance function?\\n2) Could PGR be extended to offline RL settings? If so, what modifications would be necessary?\\n3) How does PGR's performance compare against PER baselines which use approximate parametric models of prior experience?\\n4) Are there any other relevance functions thats been tried out? As thats core to the working of PGR.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Rebuttal\", \"comment\": \"Thank you very much for your thorough rebuttal. I am going to consolidate my replies here.\\n\\n## Overview \\n\\nI had two issues, one was the equivocation between model-based methods like MAX and Dreamer, and the other was the difficulty of choosing a relevance function. 
I think both concerns have been addressed, and I encourage the authors to add as many of these new results to the main paper as they can.\", \"specifically\": \"## Noisy-TVs and Relevance Functions\\n\\nI'm ok with the assertion that noisy-tvs are a thought experiment -- and taken too literally as a criticism. I was mainly concerned with the difficulty of choosing an appropriate F. We may indeed have such a physics model, but I think in practice that is unlikely. Table 4 in the appendix shows some robustness to choice of exploration bonus, and I think that's good enough for me. I would like the author's take on table 5, it seems to suggest that this method does in fact suffer from this problem to some degree. \\n\\n## Comparison to Model-based methods\\n\\nI appreciate the new table giving a comparison between Dreamer, MAX, and PGR. I still don't think there's much of a difference between \\\"densification of the existing data distribution\\\" and \\\"improving the data distribution\\\" via PGR and Exploration respectively. But I acknowledge that the looser dependence on grounded trajectories obtained via PGR vs a world-model makes a difference empirically. \\n\\n---------------------------------------------------------\\n\\nPending the rest of the rebuttal, I am inclined to raise my score to a 7.\"}", "{\"title\": \"Response to reviewer ir9Q (2/2)\", \"comment\": \"**Q4:** \\u201cHow sensitive is the algo towards the frequency of the inner loop in Algo 1?\\u201d\\n\\n**A4:** **PGR benefits from a more frequent inner loop in Algo 1, but the tradeoff with training time is sublinear. In our work, we identify via a simple elbow validation curve that a frequency of once every 10K iterations is optimal.**\\n\\nIn Figure 9 of Appendix D, we plot the performance of PGR on the hopper-stand environment as a function of the frequency of the inner loop. As mentioned in Section 5.1 of the main text, this is the environment upon which we tuned the hyperparameters for PGR. 
We show that performance increases modestly the more frequently the inner loop is performed. This is intuitive \\u2014 grounding the diffusion model more regularly in the real transitions allows it to better generate on-policy transitions, improving the stability of policy learning.\\n\\nTo trade off effectively between diminishing returns and training time, we use a heuristic elbow method on Figure 9, concluding that performing the inner loop once every 10K iterations is \\u201coptimal.\\u201d\\n\\n---\\n\\n**Q5:** \\u201cCan multiple relevance functions be combined?\\u201d\\n\\n**A5:** **Multiple relevance functions can indeed be combined. While this is not a focal point of our work, we highlight a few ways to do so here.**\\n\\nRelevance functions that provide complementary benefits can be easily combined. One way to do so is to sum or concatenate the function embeddings, after appropriate normalization. Then, passing the new scalar or vector embedding to the existing framework blithely is sufficient. Another reasonable modification is to allow separate conditioning on either or both function embeddings within the diffusion model. Note this would involve more careful tuning of the CFG coefficients corresponding to each relevance function condition.\\n\\nIn practice, complementary relevance functions in the online setting are non-obvious to come up with. As we show in our work, PGR with curiosity is unanimously superior to reward, value or TD-error based relevance functions. We believe that this line of research is more relevant for the offline setting, where large-scale pretrained priors with complementary properties can be used for guided generation. However, if the reviewer feels that this discussion is important to include in the main text, we are happy to add an additional section within the appendix.\\n\\n[1] Burda et al. \\u201cExploration by Random Network Distillation.\\u201d arXiv 2018.\\n\\n[2] Bellemare et al. 
\\u201cUnifying Count-Based Exploration and Intrinsic Motivation.\\u201d NeurIPS 2016.\\n\\n[3] Savinov et al. \\u201cEpisodic Curiosity Through Reachability.\\u201d arXiv 2018.\\n\\n[4] Osband et al. \\u201cDeep Exploration via Bootstrapped DQN.\\u201d NeurIPS 2016.\\n\\n[5] Fortunato et al. \\u201cNoisy Networks for Exploration.\\u201d ICLR 2018.\"}", "{\"title\": \"General Response (2/2)\", \"comment\": \"### 2. Contextualizing Exploration Bonuses Within the PGR Framework (Reviewers 1k48, Yyu8, ir9Q)\\n\\n**Our contributions are orthogonal to exploration. We do not make any claims about improved exploration, nor is this a focal point of our problem setting. Nonetheless, we show here that existing work in enhancing exploration can be easily and effectively integrated into the PGR framework.**\\n\\nWe first direct the reviewers to Figure 3b of the main text, which offers initial evidence that PGR goes beyond simple exploration bonuses. In particular, providing either the unconditional replay baseline (SynthER) or the model-free RL baseline (REDQ) with an exploration bonus, via intrinsic curiosity, underperforms PGR. We provide quantitative numbers in Table 6 of Appendix B, along with a repeat of this experiment using RND [1]. We reproduce the results here:\\n\\n| | Quadruped-Walk | Cheetah-Run |\\n|---------------------|------------------------|------------------------|\\n| REDQ | 496.75 $\\\\pm$ 151.00 | 606.86 $\\\\pm$ 99.77 |\\n| SynthER | 727.01 $\\\\pm$ 86.66 | 729.35 $\\\\pm$ 49.59 |\\n| REDQ + Curiosity | 687.14 $\\\\pm$ 93.12 | 682.64 $\\\\pm$ 52.89 |\\n| SynthER + Curiosity | 803.87 $\\\\pm$ 41.52 | 743.39 $\\\\pm$ 47.60 |\\n| PGR (Curiosity) | **927.98 $\\\\pm$ 25.18** | **817.36 $\\\\pm$ 35.93** |\\n\\n\\nFor further experimental analysis, we also add two baselines which modify neural network architectural inductive priors to implicitly improve exploration. In particular, we adapt both Bootstrapped-DQN [4] and NoisyNets [5] to our REDQ baseline. 
For training details and results we refer reviewers to Appendix B (c.f. Table 7). We reproduce the results here:\\n\\n| | Quadruped-Walk | Cheetah-Run |\\n|-----------------|------------------------|------------------------|\\n| NoisyNets | 688.29 $\\\\pm$ 65.55 | 770.14 $\\\\pm$ 70.53 |\\n| Boot-DQN | 721.49 $\\\\pm$ 39.82 | 754.56 $\\\\pm$ 67.38 |\\n| PGR (NoisyNets) | **939.32 $\\\\pm$ 36.74** | 893.67 $\\\\pm$ 43.47 |\\n| PGR (Boot-DQN) | 903.29 $\\\\pm$ 40.50 | **912.65 $\\\\pm$ 47.22** |\\n| PGR (Curiosity) | 927.98 $\\\\pm$ 25.18 | 817.36 $\\\\pm$ 35.93 |\\n\\n**We argue that densification under PGR is empirically orthogonal to and more useful than simply promoting exploration.**\\n\\nWe provide some simple intuition on this conclusion, which also further distinguishes our exploration-agnostic contributions: PGR densifies the relevant subset of _existing_ data to improve learning dynamics, whereas exploration is primarily concerned with improving the distribution of this data in the first place. We do not make this conclusion in the main text, as DMC/OpenAI Gym benchmarks may be too facile for exploration-centric conclusions. Nevertheless, taken together, the comparative experiments above strongly suggest that generative replay in PGR is effective and orthogonal to exploration bonuses.\\n\\n[1] Burda et al. \\u201cExploration by Random Network Distillation.\\u201d arXiv 2018.\\n\\n[2] Bellemare et al. \\u201cUnifying Count-Based Exploration and Intrinsic Motivation.\\u201d NeurIPS 2016.\\n\\n[3] Savinov et al. \\u201cEpisodic Curiosity Through Reachability.\\u201d arXiv 2018.\\n\\n[4] Osband et al. \\u201cDeep Exploration via Bootstrapped DQN.\\u201d NeurIPS 2016.\\n\\n[5] Fortunato et al. \\u201cNoisy Networks for Exploration.\\u201d ICLR 2018.\"}", "{\"title\": \"General Response (1/2)\", \"comment\": \"We thank the reviewers for their helpful comments on our work. 
We are glad to see the reviewers believe our problem setting is \\u201can important one to improve the performance of RL methods\\u201d (reviewer ir9Q) and that our work in this setting is a \\u201cvaluable contribution to the RL community\\u201d (reviewer Yyu8). Thank you also for finding our approach to be \\\"simple\\\" (reviewers 1k48, Yyu8), \\\"reproducible\\\" (reviewer Yyu8), \\\"novel\\\" (reviewer 8yrp) and \\\"worth integrating with other approaches\\\" (reviewer 1k48). We are especially grateful that the reviewers recognized the importance of our scaling experiments for \\u201cenabling more data-efficient training of large-scale RL agents\\u201d (reviewer 8yrp) and the resulting insights to be \\\"compelling and valuable\\\" (reviewer 1k48).\\n\\nWe now turn to two common questions amongst the reviewers, before addressing specific comments in individual responses below.\\n\\n### 1. Choice of Relevance Function $\\\\mathcal{F}$ and the Noisy-TV Problem (Reviewers 1k48, ir9Q, 8yrp)\\n\\n**Our framework is compatible with a wide variety of relevance functions, such as RND [1] or pseudo-count [2], which have all been demonstrated in prior work to be robust to the noisy-TV problem.**\\n\\nWe have added to the appendix additional results on our pixel-based DMC-100k benchmark, which demonstrate that PGR conditioned on more robust curiosity-based metrics continue to enjoy strong performance. In particular, we look at intrinsic rewards obtained via Random Network Distillation (RND) [1] and pseudo-counts from a context tree switching (CTS) density model [2]. For training details we refer reviewers to Appendix A.1. 
We reproduce the results here:\\n\\n| | Walker-Walk | Cheetah-Run |\\n|-----------------|------------------------|------------------------|\\n| DrQ-v2 | 514.11 $\\\\pm$ 81.42 | 489.30 $\\\\pm$ 69.26 |\\n| SynthER | 468.53 $\\\\pm$ 28.65 | 465.09 $\\\\pm$ 28.27 |\\n| PGR (RND) | **602.10 $\\\\pm$ 43.44** | 512.58 $\\\\pm$ 23.81 |\\n| PGR (CTS) | 540.78 $\\\\pm$ 88.27 | 508.17 $\\\\pm$ 74.03 |\\n| PGR (Curiosity) | 570.99 $\\\\pm$ 41.44 | **529.70 $\\\\pm$ 27.76** |\\n\\n**But what about more complex environments which actually have noisy TVs? We investigate the setting described in Savinov et al. [3], and simultaneously demonstrate that PGR is also compatible with relevance functions defined via (online) episodic curiosity (ECO).**\\n\\nIn particular, we use the \\u201cnoise\\u201d-randomized versions of the DMLab environment in Savinov et al. [3]. This environment features a maze navigation task, where visual inputs constantly feature uniform random RGB noise in the lower right portion of the image (i.e. a noisy TV). For a visualization of the environment and training details, we refer reviewers to Appendix A.2, and also to Tables S12 and S13 in Section S6 of Savinov et al. [3]. 
We reproduce the results here:\\n\\n| | Sparse | Very Sparse |\\n|-----------------|--------------------|--------------------|\\n| PPO | 7.3 $\\\\pm$ 2.7 | 4.1 $\\\\pm$ 2.4 |\\n| PPO + Curiosity | 5.6 $\\\\pm$ 1.8 | 2.8 $\\\\pm$ 1.0 |\\n| PPO + RND | 8.2 $\\\\pm$ 1.9 | 4.0 $\\\\pm$ 1.1 |\\n| PPO + ECO | 16.3 $\\\\pm$ 3.6 | 12.9 $\\\\pm$ 1.9 |\\n| PGR (Curiosity) | 9.3 $\\\\pm$ 2.0 | 5.7 $\\\\pm$ 2.2 |\\n| PGR (RND) | 11.2 $\\\\pm$ 1.0 | 8.0 $\\\\pm$ 1.8 |\\n| PGR (ECO) | **21.9 $\\\\pm$ 2.6** | **18.7 $\\\\pm$ 2.1** |\\n\\nWe see that PGR can flexibly condition on relevance functions like episodic curiosity to restore its superior performance in environments with highly stochastic transitions.\\n\\n**But more crucially, generative replay with the \\u201cright\\u201d relevance function $\\\\mathcal{F}$ is always better than without.**\\n\\nOur contribution is not related to effective intrinsic exploration in stochastic environments, or even generally robust conditioning choices for generative replay. We simply seek to make the case that online generative replay is a compelling paradigm for sample-efficient online RL. We rely on the insight that prioritized replay can be elegantly connected to, and easily instantiated via, conditional generation in generative replay. Finally, our mechanistic explanations do not rely on showing that the environment is better explored, which is a common underlying narrative in intrinsic motivation literature and the noisy-TV problem. We only demonstrate the possibility that PGR improves learning dynamics and reduces the prolific problem of early overfitting in Q-learning.\"}", "{\"metareview\": \"This paper uses generative models for samples for experience replay in RL. The paper uses conditional diffusion models, and explores different relevance functions, showing empirical improvements across experiments in online RL.\\n\\nReviewers agree that this paper is well-written, and the research problem is important and well-addressed. 
The method is novel and the results are strong, as well as the insights as to why the method works. \\n\\nReviewers had numerous concerns about the relevance function (e.g. robustness), which the authors addressed convincingly in their rebuttal (and reviewers have increased scores as a result). All reviewers agree that this work is above the acceptance threshold, with most agreeing it is a clear accept. Overall, this is a strong paper which will be a good addition to ICLR.\", \"additional_comments_on_reviewer_discussion\": \"A common question during the original reviews was about the choice of relevance functions. The authors added more results to tackle this and provide further insight, which the reviewers appreciated. The authors added an impressive amount of new results during the rebuttal period to address many of the reviewers' concerns.\\n\\nReviewer Yyu8 commented about using other generative models, but I agree with the authors that this comparison is not necessary / can make the message worse. The authors still provided results with VAEs in the rebuttal. Overall Reviewer Yyu8's review is very short, but they are still positive about the paper. Similarly, Reviewer ir9Q's review is short, but raised some interesting questions that the authors addressed in the rebuttal. Reviewer ir9Q thinks the method is simple, but other reviewers did not agree.\"}", "{\"comment\": \"Thank you for the substantive comments and discussion. I have raised my score accordingly.\"}", "{\"title\": \"Response to reviewer ir9Q (1/2)\", \"comment\": \"Thank you very much for your review.\\n\\n**Q1:** \\u201cWhile the method shows improved performance, it is a bit simple as it combines existing elements in diffusion models and RL to propose the solution.\\u201d\\n\\n**A1:** **To the best of our knowledge, our work is the first to cast widely-used prioritized replay in online RL through the lens of conditional generation. 
Moreover, we provide a novel and central role to curiosity, as a useful prioritization signal during conditional data generation. This is a novel use case well outside its usual stage in RL methods, which have historically only leveraged it as intrinsic reward for better exploration.**\\n\\nThe novelty of our approach does not lie in using a diffusion model (conditional or unconditional) to capture the replay buffer. As we emphasize through lines 410-432 in Section 5.2 of the main text, knowing what to condition on is actually more important than choosing to do conditioning (or generation at all) in the first place. Moreover, why and how conditioning works are also equally valuable questions to answer, and we provide generalizable and non-obvious insights into both these elements.\\n\\n---\\n\\n**Q2:** \\u201cIs the method compatible with different kinds of exploration bonuses? If so, how do you think they would compare?\\u201d\\n\\n**A2:** **We show in our general response that the contributions in PGR go beyond explicit exploration bonuses such as intrinsic curiosity or RND [1], or methods which implicitly promote exploration such as bootstrapped DQN [4] or NoisyNets [5].**\\n\\nThese latter methods are readily compatible with PGR, requiring only light modifications to the underlying baseline REDQ architecture. 
We reproduce the results in this comparison below (experimental details can be found in Appendix B):\\n\\n| | Quadruped-Walk | Cheetah-Run |\\n|-----------------|------------------------|------------------------|\\n| NoisyNets | 688.29 $\\\\pm$ 65.55 | 770.14 $\\\\pm$ 70.53 |\\n| Boot-DQN | 721.49 $\\\\pm$ 39.82 | 754.56 $\\\\pm$ 67.38 |\\n| PGR (NoisyNets) | **939.32 $\\\\pm$ 36.74** | 893.67 $\\\\pm$ 43.47 |\\n| PGR (Boot-DQN) | 903.29 $\\\\pm$ 40.50 | **912.65 $\\\\pm$ 47.22** |\\n| PGR (Curiosity) | 927.98 $\\\\pm$ 25.18 | 817.36 $\\\\pm$ 35.93 |\\n\\nOur results indicate densification via PGR is orthogonal to and more useful than promoting exploration alone. For a more comprehensive discussion, we direct the reviewer to our general response above.\\n\\n---\\n\\n**Q3:** \\u201cHow do you think the method would do when simply having diverse samples does not imply usefulness? An example is the noisy tv problem.\\u201d\\n\\n**A3:** **We also show in our general response that PGR can effectively condition on relevance functions based on RND [1], pseudo-counts [2] or episodic curiosity [3], which all demonstrated reliably evading stochastic noise sources in prior work.**\\n\\nThe relevant discussion can be found under the first question in the above general response, with experimental details in Appendix A. We also direct the reviewer to our response to reviewer 1k48. Specifically, we argue that the search space for desirable relevance functions $\\\\mathcal{F}$ is not that large, and that prediction error-based intrinsic curiosity (ICM) serves as a good default choice. In the event that ICM is insufficient due to stochastic noise sources such as noisy TVs, this problem has been shown in [1] and [3] to be easy to diagnose, and we demonstrate in our general response above that swapping in a more appropriate $\\\\mathcal{F}$ restores the benefits of PGR.\\n\\nFinally, we remark that we do not generate diverse samples _because_ they imply usefulness. 
Rather, our first-order goal is to generate samples that are relevant and useful (e.g. reduce early overfitting of the Q-function.) And as a possible mechanistic explanation for this property, we empirically show that these samples are actually more diverse than those generated via the unconditional baseline.\"}" ] }
5IZfo98rqr
Decomposing The Dark Matter of Sparse Autoencoders
[ "Joshua Engels", "Logan Riggs Smith", "Max Tegmark" ]
Sparse autoencoders (SAEs) are a promising technique for decomposing language model activations into interpretable linear features. However, current SAEs fall short of completely explaining model performance, resulting in ``dark matter''—unexplained variance in activations. In this work, we predict and verify that much of SAE dark matter can be linearly predicted from the activation vector. We exploit this fact to deconstruct dark matter into three top-level components: 1) unlearned linear features, 2) unlearned dense features, and 3) nonlinear errors introduced by the SAE. Through a scaling laws analysis, we estimate that nonlinear SAE errors stay constant as SAEs scale and serve as a lower bound of SAE performance on both an average and per-token level. We next empirically analyze the nonlinear SAE error term and show that it is not entirely a sparse sum of unlearned linear features, but that it is still responsible for some of the downstream reduction in cross entropy loss when SAE activations are inserted back into the model. Finally, we examine two methods to reduce nonlinear error: inference time gradient pursuit, which leads to a very slight decrease in nonlinear error, and linear transformations from earlier layer SAE dictionaries, which leads to a larger reduction.
[ "Sparse Autoencoders", "Dictionary Learning", "Language Model Features", "Scaling Laws", "Mechanistic Interpretability" ]
https://openreview.net/pdf?id=5IZfo98rqr
https://openreview.net/forum?id=5IZfo98rqr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qaSSWrbMiU", "gPJQkq4Nbb", "JotaHDRFkd", "Ezf16LKlvL", "38zcCt6YBA" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730383173901, 1730049936074, 1731979776062, 1730689315236, 1731054086429 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3966/Reviewer_zGqo" ], [ "ICLR.cc/2025/Conference/Submission3966/Reviewer_MAo1" ], [ "ICLR.cc/2025/Conference/Submission3966/Authors" ], [ "ICLR.cc/2025/Conference/Submission3966/Reviewer_un5Q" ], [ "ICLR.cc/2025/Conference/Submission3966/Reviewer_SuAh" ] ], "structured_content_str": [ "{\"summary\": \"This paper tries to decompose the error components of sparse autoencoders (SAEs) to help better interpret language models. It uncovers that the SAE error can be predicted and analyzed, and provides insights for reducing it.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper focuses on the commonly ignored detail\\u2014SAE error\\u2014to provide more insights for thoroughly understanding a research subject related to the representation of language.\\n2. It examines SAE error from multiple aspects, such as scaling law, norm prediction test, etc.\\n3. It investigates ways to reduce NonlinearError in detail.\", \"weaknesses\": \"# Major\\n\\nTo be honest, I had a very hard time understanding this paper. Specifically,\\n\\n1. How do you define the term \\u201cmost common\\u201d (features..) on line 146? It is ambiguous. \\n2. Section 4.1 is not well written, which makes it hard for me to understand the later sections of the paper. Specifically:\\n 1. Lines 193 and 194 do not make sense. For instance, Dense is nonlinear, how can the sum between Wx and Dense be a linear component of the error?\\n 2. What leads you to make the statement on line 200 to 201? You don\\u2019t even know what the \\u201ctrue features\\u201d are. Please elucidate. \\n 3. 
It is questionable to claim \\u201cthe percent of variance left unexplained by the regression will be an upper bound on the true variance explained by NonlinearError(x).\\u201d First, please provide a clear definition of \\u201cvariance explained or unexplained\\u201d, perhaps in the appendix. Second, if this is a concept similar to the explained variance in PCA, I\\u2019m not sure if you can readily extrapolate that to nonlinear components. Some rigorous derivation is needed. If you are not sure, you need to stress this is an assumption.\\n 4. Here the so-called synthetic setup is just for confirming that the *SAE is approximately an identity function on its image, i.e., the linear subspace of* $\\\\mathbf{x}'$. There is no need for this long verbosity.\\n 1. Also, what is the motivation for this? I think there should be more clarification.\\n 5. Linear transformation $\\\\mathbf{a}$ (line 161) should be denoted by $A$ for consistency (since you have $W\\\\mathbf{x}$). Your current notation makes it look like evaluating an inner product. I don\\u2019t think this is the same as the one in Section 4.2, right? Also, clarify its output dimension.\\n 6. The Gaussian noise simulation on line 204 to 207 is kinda questionable. \\n 1. The set of $\\\\mathbf{x}$ could be a curved low dimensional manifold. In this case the Gaussian noise\\u2014whose support is practically a ball of the same dimension as $X$\\u2014cannot accurately simulate Dense(x). \\n 2. The output of Dense and NonlinearError could be correlated in reality. \\n 3. The functions Dense and NonlinearError could be continuous, so Gaussian noise may not accurately simulate them.\\n 7. Since SAE is approximately an identity function according to you (line 204 to 205), we can safely assume that $\\\\mathbf{x}\\u2019$ and SAE($\\\\mathbf{x}\\u2019$) are identical. Then adding Gaussian noises to each of them simply makes the simulated SaeError (i.e., $\\\\mathbf{x}$ - SAE($\\\\mathbf{x}$)) a Gaussian noise. 
I\\u2019m not sure how using a linear map $\\\\mathbf{a}^\\\\top \\\\mathbf{x}$ can derive the results in Fig. 2, or maybe I just got it completely wrong since I can hardly understand your writing. I think you might need to elaborate on line 202 to 207.\\n\\nI\\u2019d like to stop here since Section 4.1 has already baffled me enough, making it impossible for me to review the later sections. I think some revision is needed to streamline the narrative. \\n\\n# Minor\\n\\n1. On line 135, I think you mean $\\\\|\\\\mathbf{w}\\\\|_0\\\\ll d$.\\n2. What is called the \\u201clinear subspace of $\\\\mathbf{x}$\\u201d (line 187)? $\\\\mathbf{x}$ is just a point in the activation space $X\\\\ni\\\\mathbf{x}$. If you mean a linear subspace of $X$, then is it *proper*, i.e., it cannot be $X$ itself?\\n3. Use the correct citation format \\\\citep and \\\\citet. For instance, line 35.\", \"questions\": \"Check the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper attempts to decompose the reconstruction error of sparse autoencoders into different interpretable components: (1) unlearned linear features, (2) residual dense features, and (3) nonlinear error introduced by the SAE. The authors explain their decomposition and then proceed to try to measure various components of the errors. They attempt to measure the linear and nonlinear portions of the error by training a linear regression on the activation vectors to predict the SAE reconstruction error. They claim to find some irreducible nonlinear error arising from the SAE which does not disappear as the number of parameters in the SAE increases.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The core idea to decompose the SAE reconstruction error into various parts is very interesting and, as far as I am aware, novel. 
Having such a decomposition could potentially be very useful for training better SAEs and informing the debate around the extent to which the linear representation hypothesis holds.\", \"The authors attempt to measure these quantities using a wide range of experiments. They also study the downstream effects of each component of their decomposition, and investigate methods for reducing the nonlinear error. Overall, I think the breadth of their empirical work is good.\"], \"weaknesses\": [\"The main weakness of this paper is that the central quantities are not defined clearly enough for me to be able to properly understand what is going on, or the adequacy of their empirical experiments. My current understanding, rephrased in my own language, is that the authors are making the following decomposition:\", \"Activation vector $x$\", \"The sparse autoencoder reconstruction: $\\\\textup{Sae}(x)$\", \"The error of the SAE reconstruction: $\\\\textup{SaeError}(x)$\", \"Unlearned sparse features: $\\\\sum_{i=m}^n w_i y_i$\", \"Dense features which will not be learned by the SAE: $\\\\textup{Dense}(x)$.\", \"Additional error: (I don't know if the authors have a term for this)\", \"Linearly predictable additional error: $Wx$\", \"Nonlinear error: $\\\\textup{NonlinearError}(x)$\", \"(where my notation is that each bullet point is the sum of the bullet points nested at one level below it). This was my understanding after having spent quite some time re-reading Section 3, so I'll proceed as if this is correct, but I'm not confident and I'd appreciate clarification from the authors.\", \"Assuming the above, I have several comments:\", \"The exposition in Section 3 needs to be substantially clearer for readers to be able to understand your definitions. For example, you should name the error mentioned in \\\"the SAE introduces some error when making this approximation\\\" and give it a consistent name. I also think equations (3) and (4) are misleading. 
According to my understanding, (4) is the central definition decomposing the SAE error and (3) is a consequence of (4) when you consider subtracting from $x$.\", \"As an aside, the introduction of $W$ is poorly motivated here. Also, later on you're going to consider another sense in which the error can be linearly predicted from $x$ - namely using $a^T x$ to predict $\\\\textup{SaeError}(x)$. I believe that these are not the same, but I was initially confused here and I don't understand the difference in intuition between them.\", \"In Figure 1, I don't see why Dense Features and Linear Error should be grouped together. They seem like quite different quantities with different interpretations to me.\", \"The changes of notation and setup in the first paragraph on page 6 are very unhelpful given there's already a lack of clarity. I'd strongly recommend that the authors stick to consistent notation throughout the paper, and choose a single - clearly distinct - term and corresponding notation for each component of their decomposition and stick with it throughout the paper.\", \"As an aside, if we're dropping $Wx$ in this paragraph, why did we have it in the first place? I'm not sure I ever understood the intuition for having it.\", \"Since you are agnostic to the SAE architecture, you don't introduce any notation for the features that the SAE learns. This led to me getting confused and originally not understanding that the SAE reconstruction error comes from essentially four places: learning wrong features, learning wrong feature weights, unlearned linear features, and (unlearned) dense features. My understanding is that the nonlinear error is effectively measuring the first two. Is that right? If so, a clarifying comment in this direction might be helpful.\", \"Secondly, I did not understand the description of and intuition behind the experiment in the first paragraph of page 4. 
I think this might be downstream of the fact that I haven't completely understood the authors' decomposition of the SAE error. But in any case, either the authors need to offer a clearer definition of the quantities they are working with, or more intuition for why this experiment claims to measure what they are hoping - and probably both.\", \"Given this lack of clarity in a couple of key places in the manuscript, it was hard for me to engage with the more detailed experimental results in Sections 5 onwards, since I couldn't understand what the authors were actually hoping to measure. The problem that the authors are trying to study seems fundamentally very interesting, and I'm optimistic that some version of this paper could be very solid, but as it stands it's not possible to appreciate the authors' contributions.\"], \"more_minor_points\": [\"I think the synthetic experiment setup in Section 4 is unlikely to be particularly realistic - particularly the assumption of Gaussian noise. (My read of Gurnee (2024) which the authors cite regarding pathological reconstruction errors makes the Gaussian assumption likely incorrect.) But, this is not a central concern.\", \"I suggest that the authors read over their manuscript for minor typographical errors. 
Ones that I caught include: issues with citation formatting in several places, summation indices in equations (3) and (4) being incorrect, and several stray commas.\"], \"questions\": \"Combined with previous section for clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We greatly appreciate the reviewers' comments and suggestions!\\n\\nIt seems to us that the reviewers all agree that we have interesting findings, but that we need to clarify many parts of the paper and perhaps run a broader set of experiments. Thus, we have decided to withdraw our work and improve it in these aspects. \\n\\nThank you all again so much for your time and expertise.\"}", "{\"summary\": \"This paper analyzes sparse autoencoder (SAE) errors in language models and finds they can decompose these errors into three parts: unlearned sparse linear features, a dense linear term and a \\\"nonlinear error\\\" term that persists even as SAEs get larger. The paper studies the nonlinear error across different token positions and SAE widths, with experiments on Gemma-2 9B. The paper attempts to reduce this nonlinear error through two methods: using gradient pursuit during inference (which only slightly helped) and leveraging SAE reconstructions from adjacent model components.\\n\\nI found this paper complex, and while I think it is flawed in the current state, I am happy to revise my opinion if my concerns are addressed.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"I think that the biggest two problems with Sparse Autoencoders and Mechanistic Interpretability research are i) faithfulness of decomposition of models and ii) real-world application of insights. 
This paper is solid progress on i) because they ask why SAEs are currently limited.\", \"The paper describes a sensible theoretical decomposition of model activations into SAE learned features, dense features and non-linear features, and also measures these terms in practice.\", \"The paper does a wide range of analyses: automated interpretability, FVU and loss measurements.\"], \"weaknesses\": \"1. **Large sections of the paper are difficult to understand**\\n\\na. Some notation is defined in strange ways that require me to reread them many times. E.g. `SaeError(x) := x - Sae(x)` is defined with a -1 coefficient for `Sae(x)`, but `NonlinearError(x) := Sae(x) - Wx - \\\\sum_{i=0}^{m} w_i \\\\vec{y}_i` is defined with a +1 coefficient for `Sae(x)`. I *think* that most of these problems are downstream of defining some parts of the notation with an end state in mind, e.g. the weak linear representation hypothesis suggests that there exists sum ideal dictionary decomposition of the vector, but other parts of the notation is defined with the current state in mind, e.g. you slot `SaeError(x)` into the sum with the existing `Sae(x)` and `x` terms and nothing else. However, I'm not confident that this consistently considering solely the end state or current state is either necessary or sufficient for making the notation clearer.\\n\\nb. Some statements do not make sense after several times re-reading. E.g. \\\"The intuition behind this test is that if ... its existence is not guaranteed\\\".\\n\\nc. \\\"If this test is accurate, we can use it to estimate the linear component of the error, `Wx + Dense(x)`\\\" but `Dense(x)` was introduced as possibly non-linear, and `Wx` is non-linear, so how is this the **linear** part of the error?\\n\\n2. 
**I think the conclusions are too strong**.\\n\\nThe paper states \\\"We also find that the norm of the `NonlinearError(x)` is constant on a per token level as\\nwe scale SAE width\\\", and while this section changes the definition of `NonlinearError` to be the error as determined by their method, in the conclusion the paper states \\\"... the presence of constant nonlinear error ...\\\" with no hedging.\\n\\nI am not convinced that the methods in the paper are capturing *true* linear error and non-linear error. One reason this may be happening is that all the training on the error term `x - Sae(x)` may involve some vestigial `x` due to shrinkage (which could be boosted by 10x by the various predictors, since the appendix suggests there is 10% shrinkage). This would mean the various methods that predict linear or nonlinear error may be cheating and picking up on this shrinkage. Is this addressed in the paper? Why is there no hedging in the conclusion that the methods presented may not capture true (non-)linear errors, or even that the weak linear representation hypothesis may not be true? For example, in the extreme case where the SAE was entirely dead, we can predict `SaeError(x) = I * x` and so the SAE solely has linear error, which suggests there are some assumptions that need to be stated (that the SAE is sufficiently good at reconstructing I think).\", \"questions\": \"1. Why is it necessary to sanity check that estimated linear errors and estimated non-linear errors are correlated in Section 4.1? I found this experiment technical and in-the-weeds and couldn't tell why it needed to be done (the interesting parts of the paper are with the actual LLM and actual error).\\n\\n2. Why are all the linear extractors single linear matrices, or another SAE? 
Shouldn't an SAE with no sparsity penalty be applied somewhere so it's possible to learn features in superposition without interference (not possible with a plain linear matrix) and features that are dense (not possible with a sparsity penalty)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is an analysis of the error of sparse autoencoders applied to LLM interpretability. They address the shortcoming of the ability to reconstruct the hidden state, and observe that scaling laws show that an SAE would not be able to fully represent the hidden state, even in the limit. The theoretical framework is applied to decomposing the error of one layer of the Gemma 2 open source LLM, using the open source SAE published in a separate work, which agrees with the breakdown.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper\\u2019s analysis is very thorough and a very compelling decomposition of the scaling of sparse autoencoders for feature extraction. The observations are very well described, and the theory and methodology are sound. I think it would be an impactful work, however, it does have one fatal shortcoming: (see below)\", \"weaknesses\": \"Weaknesses\\nThe main weakness is that the method is only applied in one model, and even worse, only one layer in the model. The methodology is self-described as empirical, but because only one context is analyzed, the findings are not sufficiently proven because the paper essentially shows only one data point. That is, is it possible that this only happens in this one layer for this one model? I would not think that the findings are exclusive to just this one context, but would it be possible to construct an LLM architecture for which this result does not hold? 
It should be easy to repeat the analysis for various open source models and different layers, and to thus illustrate that the observations and theory hold more widely. With just more demonstrations of the same trends across more models, the paper would be strong.\\n\\n\\nAnother easy-to-correct shortcoming in the presentation is that details of the autoencoder training process are left out. There is a citation to GemmaScope, but given that this paper is trying to make a broader theoretical claim, a short description of the SAE training process would be apt. Similar to applying the method to more models and layers, it would also be interesting to verify if different SAE algorithms resulted in features that change the results.\", \"questions\": [\"Why was Gemma analyzed? Why was Gemma Scope chosen? What is special about these choices? (Availability as an open source model with a published SAE is an acceptable answer, but that should be stated in the paper that the choice was made out of convenience.)\", \"Did you train the SAE from scratch, or are you using the published checkpoint?\", \"What type of sparse autoencoder is considered? How was it trained? (As mentioned above, briefly describe the technique if you are using a published checkpoint.)\", \"Figure 1:\", \"What are the equations in the labels supposed to mean?\", \"Where did the data come from?\", \"Exactly which part of the figure is \\u201cdark matter\\u201d?\", \"Equation 1:\", \"is the vector w \\u201crandom\\u201d or is it actually a coordinate of x in the y basis?\", \"Is ||w||_1 << d the right equation? 
Couldn\\u2019t one of the components be much larger than one?\", \"Line 196: What is the set up of the synthetic set up?\", \"What is L_0?\", \"Line 242 --The exact set up of the random vectors here was unclear.\", \"What is Figure 3b supposed to show?\", \"Line 240: Exactly what is \\u201clikely\\u201d supposed to mean?\", \"Line 264: Bibliography: Gemma reference has a broken last name: \\u201cTeam\\u201d\", \"Figure 4: The caption is not a full sentence and not clear. Expand the caption.\", \"What does the last equation on Line 490 say?\", \"Appendix A:\", \"This section needs a little more introduction: What is the proof trying to show, and what does it imply?\", \"Line 697: Why is the case \\\\lambda=1 special, and what do we conclude from rho=0.73 ?\", \"Is d > m?\", \"Line 662: define WLOG\", \"Appendix B: report the results in the appendix, instead of just qualitatively describing them.\", \"Does Appendix C have a reference in the main text?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5IWJBStfU7
Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?
[ "Maxime Méloux", "Silviu Maniu", "François Portet", "Maxime Peyrard" ]
As AI systems are increasingly deployed in high-stakes applications, ensuring their interpretability is essential. Mechanistic Interpretability (MI) aims to reverse-engineer neural networks by extracting human-understandable algorithms embedded within their structures to explain their behavior. This work systematically examines a fundamental question: for a fixed behavior to explain, and under the criteria that MI sets for itself, are we guaranteed a unique explanation? Drawing an analogy with the concept of identifiability in statistics, which ensures the uniqueness of parameters inferred from data under specific modeling assumptions, we speak about the identifiability of explanations produced by MI. We identify two broad strategies to produce MI explanations: (i) "where-then-what", which first identifies a subset of the network (a circuit) that replicates the model's behavior before deriving its interpretation, and (ii) "what-then-where", which begins with candidate explanatory algorithms and searches in the activation subspaces of the neural model where the candidate algorithm may be implemented, relying on notions of causal alignment between the states of the candidate algorithm and the neural network. We systematically test the identifiability of both strategies using simple tasks (learning Boolean functions) and multi-layer perceptrons small enough to allow a complete enumeration of candidate explanations. Our experiments reveal overwhelming evidence of non-identifiability in all cases: multiple circuits can replicate model behavior, multiple interpretations can exist for a circuit, several algorithms can be causally aligned with the neural network, and a single algorithm can be causally aligned with different subspaces of the network. We discuss whether the unicity intuition is necessary. One could adopt a pragmatic stance, requiring explanations only to meet predictive and/or manipulability standards. 
However, if unicity is considered essential, e.g., to provide a sense of understanding, we also discuss less permissive criteria. Finally, we also refer to the inner interpretability framework that demands explanations to be validated by multiple complementary criteria. This work aims to contribute constructively to the ongoing effort to formalize what we expect from explanations in AI.
[ "AI interpretability", "mechanistic interpretability", "causal consistency", "explanatory algorithms", "circuits" ]
Accept (Poster)
https://openreview.net/pdf?id=5IWJBStfU7
https://openreview.net/forum?id=5IWJBStfU7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zP28q2duDD", "xLjcEz90AX", "wKxMjuewbA", "qYoWK33vbq", "nUcMas9KAB", "mpZmigMNWF", "XEtxjQ8iWM", "UAh8Vxq0UR", "Mk80MDqZyS", "Kect8nhlWt", "DhzAn5XqIm", "CYiLoQZmzP", "CFpE6yGnhs", "8tqmokZu2q", "7RGjO3mhR4", "6ngOcAPcg8", "6YKmyY2gRG", "5s5o9XhrOH" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732547448109, 1732805514094, 1730484406610, 1732269903805, 1732522855944, 1732522574841, 1730456369882, 1732101781422, 1732522830142, 1737524060787, 1732317011981, 1732102102360, 1730578522673, 1734490188263, 1732805491803, 1732102025245, 1732101833352, 1730660820078 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_S8oD" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_S8oD" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_S8oD" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_Q4uc" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_J9MJ" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_J9MJ" ], [ "ICLR.cc/2025/Conference/Submission10546/Area_Chair_fr7j" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Authors" ], [ "ICLR.cc/2025/Conference/Submission10546/Reviewer_ExBt" ] ], "structured_content_str": 
[ "{\"title\": \"No further comments\", \"comment\": \"Thanks. No further comments at this point.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe would like to express our gratitude for the thoughtful and thorough review of our manuscript. The detailed feedback has been valuable in improving the overall quality of the paper. We also appreciate your suggestions regarding computational complexity and related references.\\n\\nWe are confident that the revisions we have made reflect your input and contribute to a clearer and more robust manuscript. If we have addressed your concerns, we would be grateful if you might consider revisiting your score.\\n\\nThank you once again for your constructive feedback.\"}", "{\"summary\": \"The authors investigate the potential issue of identifiability in mechanistic interpretability through experiments on small MLPs where (isolated) circuits are enumerated and assessed fairly exhaustively. They find identifiability is an issue at all levels: in the number of subcircuits functionally aligned with the full network, in the number of algorithms consistent with the behavior, and in the mappings between algorithms and circuits. 
This problem gets worse as architecture size increases, and training on a greater number of tasks only mitigates this issue to some extent.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Well written\", \"Well referenced\", \"Addresses an issue of interest to the interpretability community\", \"Provides exhaustive experiments with well-understood ground truth.\", \"Investigates the effects of various variables (architecture size, number of tasks, noise) on the identifiability problem.\", \"This kind of study is very much needed.\"], \"weaknesses\": \"I first summarize the most important points and then elaborate.\\n\\n* It is unclear whether the conclusions about identifiability problems can be stated generally or only in particular for circuits as isolated objects (circuit discovery through zero-ablation), which might mischaracterize the functioning of the network as a whole.\\n* The possible improvements advertised in the abstract/introduction are rather only sketched in section 5.1.\\n\\n\\nIs the large number of circuits/possible explanations due to looking for circuits in isolation (e.g., via zero ablation) rather than working in context with the rest of the network (e.g., via activation patching)?\\n\\nLine 320 describes the circuit isolation procedure. This is equivalent to zero-ablation and the criterion is equivalent to the definition of sufficient circuit.\\nWhat would identifiability look like if we chose to define circuits as they function in the context of the full network? See for example the definition of circuit via activation patching in Adolfi et al., 2024.\\n\\nIsn\\u2019t it possible that many of the isolated circuits discovered through zero-ablation are mischaracterizations of the in-context functioning of the circuits as they are embedded in the full network?\", \"line_080\": \"\\u201ca model\\u2019s behavior should have a single, well-defined explanation\\u201d. 
There is no citation here and it is unclear where this intuition is coming from, what its theoretical support is, etc. To offer a counter-intuition: consider a circuit that is sufficient on its own to mimic the behavior of the full network over some input domain; such a circuit need not be unique. Trivially, the circuit plus additional neurons to form the full network is another such circuit. But there is no contradiction in intuiting that multiple such circuits of different sizes, with partial or no overlap exist in the network and, in principle, offer alternative (perhaps incompatible?) \\u2018explanations\\u2019 (see Adolfi et al. 2024 for theoretical analyses).\", \"on_line_091\": \"the authors mention \\u201cthe identifiability properties of current MI criteria\\u201d. The criteria of interest that define circuits leave open the possibility that these circuits are not unique. So the definition of these circuits does not preclude non-identifiability unless the uniqueness property is trivially appended to the definition. This leads one to suppose that uniqueness under the typical definition of circuits is a property left to be determined empirically. It could, in principle, be motivated theoretically, but I see nothing in that direction here. Is it possible to provide some theoretical motivation for uniqueness that is not trivially stipulated but justified from first principles?\\n\\nIf a network implements the same functionality for different inputs through different circuits and algorithms, does this really make mechanistic interpretation hopeless? (i.e., in this case, is only a functional explanation capable of unifying all the existing \\u2018explanations\\u2019?). 
It would be useful to have any assumptions about satisfactory explanations made explicit in the manuscript.\", \"line_489\": \"\\u201cthe challenge lies in defining criteria that distinguish valid explanations from misleading ones.\\u201d\\nIt seems to me that, conceptually, identifiability does not pose a problem for distinguishing misleading from valid explanations. The problem arises only if an explanation is presented as unique or valid for a full input domain when this is not so. This issue might warrant some clarification.\", \"line_490\": \"\\u201cAccording to MI, the explanatory algorithm should be unique, meaning multiple competing explanations should not exist.\\u201d But this statement is made without citation. This assumption seems ill-founded to begin with, for the reasons mentioned above. Where does the criterion come from?\", \"questions\": \"Minor comments, questions, and suggestions:\\n\\nLine 403 states that \\u201clarger architecture\\u2026could also lead to greater overparameterization\\u201d. This could benefit from elaboration; in particular, how larger architecture could lead to a reduction in the number of valid abstractions.\\n\\nOn Line 065, \\u201cGiven the near impossibility of exhaustively searching all possible algorithms across all subsets of a neural network\\u201d, I might suggest to reframe this not as impossibility but as intractability, infeasibility or implausibility. Certain interpretability queries might have large search spaces that could nevertheless be searched efficiently. The relevant property is the complexity of the interpretability query, not merely the size of the search space. For computational complexity analyses of circuit discovery, see Adolfi et al 2024.\\n\\nOn Line 067, the authors state \\u201cresearchers have developed approximation methods with different assumptions and trade-offs\\u201d. 
It seems to me that the circuit discovery methods that are typically developed are heuristics for circuit finding, not approximation algorithms with any proven guarantees. In any case, it would be useful if the authors could distinguish between these two categories in their descriptions.\\n\\nThe citation to Van Rooij on Line 046 does not seem to fit with the corresponding sentence, as that paper does not deal at all with interpretability, as opposed to Lindsay, 2024, which is indeed an appropriate citation. For examples of studying the fundamental properties of (inner) interpretability queries, see Adolfi et al., 2024.\\n\\nSection 2.1 mentions interpretability work on transformer models but only in language. An example from vision transformers can be found in Vilas et al. 2023.\\n\\nPlease clarify the notation in Definition 4.\\n\\nLine 229 makes an implicit statement about computational complexity but provides no citation. See Adolfi et al. 2024 for relevant complexity analyses. This is also relevant to the statement on Line 257. Here it would also be useful to clarify how uniform random sampling \\u201capproximates\\u201d the desired measure, as this seems non-obvious. Perhaps the authors mean random sampling is a heuristic with unknown properties?\\n\\nLine 494 states that current MI methods can only approximate their targets because exhaustive enumeration is impossible for large models. This is technically incorrect, as even for some problems with exponential search spaces, efficient search algorithms that find optimal solutions are possible. The relevant notion is the computational complexity of the interpretability queries, not simply the size of their search space (see Adolfi et al., 2024).\\n\\nSection 2.1 describes a parallel between AI interpretability and neuroscience. A framework that draws from lessons grounded in this parallel is described in Vilas et al. 2024. 
This framework provides a nice embedding for the what-where distinction, corresponding to the algorithmic and implementational levels, respectively.\\n\\nThe problem of identifiability interacts in interesting ways with the computational complexity of circuit finding. Adolfi et al. 2024 analyses circuit queries that are relevant to the authors\\u2019 points on identifiability. See, for instance, counting problems which ask for the number of circuits in a trained neural network that have a certain property (e.g., they are sufficient for a behavior). Furthermore, if the number of sufficient circuits is typically large, heuristics for otherwise intractable problems (e.g., sufficient circuit) could seemingly find their targets in a feasible amount of time. In this scenario, non-identifiability is an important catch.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thanks for the clarifications/changes. I have no further comments at this point except this:\\n\\nThe intractability I was referring to above has to do with the complexity status of interpretability queries (the intrinsic hardness of certain computational problems that formalize circuit discovery), not with the (more obvious) time complexity of enumerating an exponential number of circuits. The statements in the manuscript about \\u201cintractability of enumerating\\u201d should probably be amended/supplemented to reflect this, since the more relevant notion here is the intractability of problems, not the complexity of enumerating (which is merely one possible approach to problems).\\n\\nFor reference, see e.g.: https://doi.org/10.48550/arXiv.2410.08025\"}
We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period.\\n\\nWe thank you again for taking the time to review our work.\"}", "{\"comment\": \"Thank you for your positive assessment of our answer.\\n\\nIn line 49, our usage of \\\"intractable\\\" refers to the time complexity of enumerating all possible candidate circuits and it is its intended usage.\\n\\nHowever, we mention the fundamental intractability discussed by [1] and [2] about the intrinsic computational hardness of interpretability queries in section 5.2 regarding the possibility that MI is fundamentally underdetermined. We have updated the sentence citing these articles to better emphasize this aspect, it is now:\\n\\\"The intrinsic computational hardness of interpretability queries (Adolfi et al., 2024a;b) suggests that MI may have fundamental limits, leaving it possibly underdetermined.\\\"\\n\\nThank you again for the constructive discussion. Let us know if there are additional ways in which we can further improve the paper.\"}", "{\"summary\": [\"The objective of the paper is to answer the question, do current criteria in mechanistic interpretability (MI) guarantee the identifiability of the explanation?\", \"Authors sort MI methods into two broad strategies:\", \"_where-then-what_ focuses on finding a subset of the network \\u2013 a circuit \\u2013 that captures most of the information flow from in- to outputs. Once this circuit is identified, the next step is to interpret its components (features) to derive the explanatory algorithm.\", \"_what-then-where_ starts by identifying candidate algorithms and then searches subspaces in the neural network where the algorithm may be implemented, using causal alignment between the explanatory algorithm\\u2019s states and the network\\u2019s internal states.\", \"They stress testing both methods with toy models: small MLPs trained on logic gates. 
They performed three main types of searches to test different interpretability criteria:\", \"Circuits search: Looking for subnetworks that perfectly replicate the model's behavior\", \"Interpretations search: Trying to map neurons to logical gates in a way that's consistent with their activations\", \"Mappings search: Testing different ways to map logical gates to groups of neurons\", \"Key findings in toy models: they found multiple valid interpretations for the same network.\", \"85 different circuits that achieved perfect accuracy\", \"An average of ~536 possible logic gate interpretations per circuit\", \"159 perfect minimal mappings between algorithms and neurons\", \"In total, over 45,000 different possible computational abstractions\", \"An experiment with a larger NN trained on a subset of MNIST revealed similar dynamics\", \"The circuit search found over 3000 valid circuits\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors introduce their own taxonomy and formalization (e.g. where-then-what/what-then-where, Circuits, Mappings, etc) for important concepts discussed in the paper. While I haven\\u2019t fully wrapped my head around the usefulness of the taxonomy, I appreciate the effort to deconfuse different interpretability methods. I think this is the strongest suit of the paper and I wish they had focused the paper on the taxonomy and less on the experiments.\", \"I enjoyed the writing style, and it was easy for me to follow, particularly Sections 2 and 3. I also found Figure 2 to be helpful. 
This was particularly true for the _what-then-where_/_where-then-what_ split.\", \"The paper touches on an important topic within interpretability \\u2013 a lack of quality in the discourse around helpful metrics for interpretations.\"], \"weaknesses\": [\"While I think the taxonomy could bear some relevant fruit, for experimentally testing the metrics using NNs this small, it seems challenging to generate insights that are relevant to interpretability as a total. From my understanding, we are not close to having high-confidence circuit interpretations (\\u201cperfect circuits\\u201d) in the first place, so working under this assumption might be several steps ahead.\", \"Continuing on the larger NN experiments: I struggled to understand the points in line 479 ff. What is in this case the \\\"valid circuit\\\"? I hoped this section could have bridged a gap toward showing how this metric could be used in the future but failed to show its limitations in this instance. (But maybe I simply overlooked that.)\", \"Lastly, I think there is also too little emphasis on existing literature that clearly touches on the underlying problem: disentanglement. I would consider modeling the paper around the taxonomy and then focusing on existing research and problems in relation to known circuits, such as IOI.\"], \"questions\": \"(From the weakness section)\\n- What is in the case of the MNIST NN the \\\"valid circuit\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"We sincerely thank the reviewer for their time, insightful comments, and kind remarks about the clarity and interest of our work.\\n\\n__On the definition and measurement of \\u201cincompatibility\\u201d:__\\nWe have expanded the definition of incompatibility in the manuscript (section 2.4). 
We broadly define fully incompatible explanations as pairs of explanations that share the same epistemological goal but lack overlap in location (within the neural network, the \"where\") and/or differ in internal states (e.g., algorithms, the \"what\"). \nWe agree that compatibility can also be partial, but this is beyond the scope of this work and we leave open formal definitions and measurements of compatibility for future research. \nOur study primarily focuses on at least partially incompatible explanations (i.e., no full overlap). The number of incompatible explanations that we find is particularly large, and examples of fully incompatible explanations have been added in Appendix B.\\n\\n__On the distribution of the training set:__\\nWe have added an experiment where the training procedure is repeated with varying training distributions. The results, in Appendix C.5, suggest that training biases do not significantly affect the conclusion.\\n\\n__On training error and overfitting:__\\nWe have expanded our previous experiment on training dynamics, in which we analyzed scenarios where training was stopped at different loss cutoffs. The manuscript now includes additional figures to that effect (Appendix C.4). Our findings indicate that approaching perfect training error does not substantially alter the number of explanations found. Interestingly, many incompatible explanations can also be identified in randomly initialized (untrained) networks that just happen to implement the target logic gate.\\n\\nFinally, the source code used for our experiments will be released upon publication.\\n\\nPlease let us know if we can provide any additional information to clarify our work.\"}
We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period.\\n\\nWe thank you again for taking the time to review our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Good responses, maintaining score\", \"comment\": \"Thank you for clarifying that the what-then-where approach does not rely on DAS. Admittedly, as I'd reflected over the last few weeks I'd been troubled by the apparent reliance on DAS, as activation patching can trivially score high on the IIA if enough dimensions are patched. I was going to lower my score accordingly, but your rebuttal helped clarify that this method does not rely actually rely on patching.\\n\\nI will maintain my score.\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"We would like to thank the reviewer for their comments and for appreciating our distinction between \\\"what-then-where\\\" and \\\"where-then-what\\\" approaches to mechanistic interpretability. Below, we address potential misunderstandings and clarify our contributions.\\n\\n> What is in the case of the MNIST NN the 'valid circuit'?\\n\\nIn the MNIST experiment, like all our experiments, there is no concept of one single \\\"valid circuit.\\\" The field of mechanistic interpretability proposes various criteria to define what constitutes an accepted explanation. Our experiments do not require the notion of a \\\"true\\\" or \\\"valid\\\" circuit, but only test whether the criteria induce a unique accepted explanation. \\nSpecifically, we show that the existing criteria suffer from identifiability issues, they lead to multiple incompatible circuits that are all equally plausible according to the criteria. It is important to note that we are not testing specific interpretability methods; rather, we are testing the criteria underlying those methods. Furthermore, we are not proposing new metrics in this work. 
Instead, we provide counterexamples illustrating that existing criteria fail to yield a unique solution (a unique explanation). This raises foundational questions about the assumptions and objectives of mechanistic interpretability that we discuss at length in Section 5.\\n\\n> It seems challenging to generate insights that are relevant to interpretability as a total.\\n\\nOur focus is on testing the interpretability criteria themselves. These criteria are meant to be applicable across contexts, including small neural networks. By focusing on small neural networks, we can enumerate all possible explanations exhaustively and demonstrate counterexamples that challenge current criteria. Such stress testing is intractable at scale, but our experiment with a larger NN on MNIST shows a lower bound on the number of circuits, proving that the problem remains at this scale.\\n\\nOur work is stress-testing the foundations of our field with respect to one property: identifiability. Our work seeks to contribute constructively to the field by questioning the underlying definitions of what is a valid explanation. A clearer understanding of what constitutes an explanation will ultimately guide the development of more reliable methods for generating them.\\n\\n> \\\"Too little emphasis on existing literature that clearly touches on the underlying problem: disentanglement.\\n\\nOur work does not target specific methods for finding explanations or critique particular instances of circuits previously identified using these methods (like IOI). Instead, we address a more foundational question: whether the criteria used to evaluate explanations reliably induce a unique solution (i.e., a unique explanation). Through extensive empirical evidence, we demonstrate that current criteria fail in this regard. 
While disentanglement-related literature provides valuable insights, our primary aim is to stress-test the criteria themselves, not to evaluate or propose methods based on disentanglement.\\n\\nFinally, the source code used for our experiments will be released upon publication.\\n\\nWe hope this response clarifies our contributions and addresses any misunderstandings. Our work is intended as a constructive step toward more rigorous definitions and criteria for explanations, which we believe will ultimately strengthen the field of mechanistic interpretability. Please let us know if we can further clarify our contributions.\"}", "{\"summary\": \"This paper investigates the \\\\emph{identifiability} of mechanistic explanations from interventional evaluations on toy neural networks. The authors find clear non-identifiability at multiple stages of the interpretability pipeline: multiple interpretations can exist for a circuit, multiple circuits can generate the same behavior, each algorithm can be aligned to multiple activation subspaces, etc.\\nThis non-identifiability persists regardless of whether MI explanations are generated first by localizing a subset of the network, then deriving an interpretation, or first generating a candidate algorithm and trying to then find an activation subspace corresponding to that algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"A rare case of compelling and relevant deconfusion experiments which don\\u2019t devolve into over-claims. 
The authors do well in the discussion highlighting that identifiability may not be needed for all applications of mech interp, and that the non-identifiability observed here is on toy models, so might not extend to larger models trained on multiple tasks.\\n\\nThe \\u201cwhat-then-where\\u201d strategy implemented here appears to solely utilize an approach based on Distributed Alignment Search (DAS), which makes the results about non-identifiability quite relevant to recent discussion:\\n\\nMakelov, A., Lange, G., & Nanda, N. (2023). Is this the subspace you are looking for? An interpretability illusion for subspace activation patching. arXiv. https://arxiv.org/abs/2311.17030\\n\\nWu, Z., Geiger, A., Huang, J., Arora, A., Icard, T., Potts, C., & Goodman, N. D. (2024). A reply to Makelov et al. (2023)'s \\\"Interpretability Illusion\\\" arguments. arXiv. https://arxiv.org/abs/2401.12631\", \"weaknesses\": \"Primary weakness: The most novel contribution of this paper is its experimental results, which don\\u2019t have enough description: Appendix B only contains aggregated results about the number of circuits and average interpretations per circuit found, but e.g. lacks examples of said circuits for qualitative validation. This significantly undercuts my ability to validate the correctness of the experiments.\\n\\nIn addition, the authors acknowledge appropriately that identifiability may not be necessary if the goal of MI is merely to steer a model. However, much MI work is driven by a desire to simply further scientific understanding of language models. What types of scientific inquiries require computational identifiability, and which do not? The paper could be strengthened by further discussing how much identifiability matters if the goal is scientific understanding, rather than just the model steering mentioned in lines 520-527.\", \"questions\": \"1. Could you elaborate on the meaning of \\u201cincompatible\\u201d in line 355? 
The paper would benefit from a clear example of two incompatible explanations, ideally in the main body.\\n\\n2. Please include in the appendix random examples of some of the circuits found so that they can be qualitatively assessed by readers.\\n\\n3. Could you comment on how these results relate to the discussion of Makelov et al. (2023) and Wu et al. (2024), cited above in section \\u201cStrengths\\u201d?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper aims to answer the question of whether mechanistic interpretability guarantees identifiability, using small-scale MLP networks which allow tractable exhaustive enumeration in the experiments, and the finding is no. The reviewers are generally satisfied with the authors' response and revised draft, and raised their scores after the rebuttal discussion accordingly. Hence, an acceptance is recommended.\\n\\nNevertheless, one part that still remains unclear is the experiments in sec 4.3, which is rather short and needs more clarification on what explanations/functionality are found in the MNIST network. The authors are urged to update sec 4.3 with more details like in Appendix B.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers have various concerns including:\\n1. What's the motivation of requiring identifiability in MI?\\n2. The details of major experiments are lacking\\n3. Applicability to large networks in practice.\\n\\nThe authors answered 1 and 2 well by revising the draft with more accurate descriptions and providing details in the appendix. For 3, I think the authors can provide more results in sec 4.3, which will be beneficial to the community.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely thank you for your thoughtful and careful consideration of our work, as well as for maintaining the score. 
We appreciate the time spent reflecting on the details, and we're glad that our clarifications helped address the concerns about the reliance on DAS. Should any further questions arise, we are happy to provide additional insights.\\n\\nThank you again for your time and contribution.\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"We greatly thank the reviewer for their extensive and insightful feedback, as well as for appreciating the usefulness and relevance of our work and the exhaustiveness of our experiments. This review has been highly instrumental in the revisions we made to the paper.\\n\\nWe first give a high-level response to the most important points raised:\\n\\n__On whether the conclusions about identifiability problems can be stated generally or only in particular for circuits as isolated objects:__\\nIt may indeed be the case that identifiability issues of the \\\"where-then-what\\\" scenario stem from the isolation of circuits from the rest of the network. This would point to a problem with the definitions; in this work, we use the precise goal of circuit discovery that has been formulated in previous work. Also, we emphasize that in the what-then-where strategy, the IIA requirement is a strong case of activation patching. It relies on exhaustively applying counterfactual interventions to parts of the network in context. Specifically, the computation flow goes through the entirety of the network's components when evaluating mappings. As a result, the incompatible explanations found in this strategy indicate that identifiability problems cannot simply be reduced to mischaracterization of the functioning of the network as a whole.\\n\\n__On the extension of the possible improvements mentioned in the abstract/introduction:__\\nWe have largely rewritten Section 5 (discussion). Specifically, we now give in Section 5.2 several avenues of research through which identifiability issues may be resolved. 
Amongst these, we now discuss the inner interpretability framework mentioned in Vilas et al. (2024) as a promising path toward strengthening MI as an experimental science.\\n\\nWe now proceed to respond to individual feedback points:\\n\\n__On why a model's behavior should have a single, well-defined explanation, and whether MI assumes unicity:__\\nWe have introduced a new Section (2.4), in which we now argue that the unicity of explanation of a phenomenon is a strong intuition deeply rooted in human reasoning. After slightly rephrasing the abstract and introduction, we have removed phrasing implying that unicity is an explicit requirement. We now frame the unicity of explanation as a property that we might intuitively expect but that is clearly violated in our experiments. We also argue by extracting quotes (documented in Appendix C) that previous MI research implicitly assumes a unique explanation.\\n\\n> Trivially, the circuit plus additional neurons to form the full network is another such circuit.\\n\\nWe exclude such a scenario: every time we find two circuits that perfectly compute the behavior of the full network but one is included in the other, we only keep the smallest one. Similarly, for the what-then-where approach, we also only keep the smallest mapping that we find. That's why we always report the number of \\\"minimal mappings\\\" and not the number of mappings.\\n\\n__On whether non-identifiability makes MI hopeless:__\\nWe have clarified our positioning towards the need for identifiability. In Section 5, we question whether the lack of unicity poses a problem (5.1) and whether it is achievable in the first place (5.3). 
In summary, we think that, as a community, we could: (i) clarify the epistemic goals of an explanation (potentially accepting the lack of unicity), (ii) investigate less permissive criteria, and (iii) embrace broader frameworks like the inner interpretability framework.\\n\\n__On \\u201cthe challenge lies in defining criteria that distinguish valid explanations from misleading ones\\u201d:__\\nWe have removed this line from the manuscript, as it did not accurately reflect our positioning.\\n\\nIn addition, we have applied most of the suggested minor changes, including the following:\\n- Stronger justifications as to why a larger architecture could also lead to greater overparameterization have been given (l. 398-401).\\n- The \\u201cnear impossibility of exhaustively searching all possible algorithms\\u201d has been rephrased as \\\"intractability\\\" (l. 49).\\n- The distinction between circuit-finding heuristics and approximation algorithms has been clarified (l. 53-57).\\n- We have edited several citations: removal of the Van Rooij citation, and addition of Vilas et al., 2023 on vision transformers in MI, Adolfi et al., 2024 on the inner interpretability framework, and Adolfi et al., 2024 on computational complexity.\\n- We have clarified the notation in Definition 4 (l. 197).\\n- The initial statement about computational complexity has been adapted and sourced (l. 204)\\n\\nFinally, the source code used for our experiments will be released upon publication.\\n\\nPlease let us know if we can provide any additional information to clarify our work.\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"We greatly thank the reviewer for their time and feedback, and for highlighting the relevance of our work and the discussion on the limits of its scope.\\n\\nTo address the lack of examples, we have added a new appendix B, which contains more examples of circuits, interpretations, algorithms, and mappings found. 
In addition, we have clarified the text in the article to emphasize the fact that the example circuits given in Figure 2 were also found as part of our experiments. We hope this will help readers visualize and assess the quality of our work.\\n\\nWe have clarified our position concerning identifiability in two ways; first, we now argue in section 2.4 that the need for identifiability stems from a strong intuition rooted in human psychology and that while not clearly stated, it is already an implicit assumption of MI. The paper now frames the unicity of explanation as a property that we might expect but do not find.\\nSecond, we have extensively rewritten the discussion (Section 5), in which we describe in which scenarios the lack of unicity may or may not matter. In this context, we now cite the recent debate about the interpretability illusion as an example of why explicitly stating the purpose of the explanation is relevant to the question of identifiability.\\n\\nIn addition, we would like to clarify that the what-then-where approach in our work does not rely on DAS. While DAS only explores a part of the search space via gradient descent and therefore produces an approximate solution, in our experiments we find all exact maximizers of IIA by exhaustive enumeration. Furthermore, we compute the true IIA value, while DAS typically only approaches it by randomly sampling inputs.\\n\\nFinally, the source code used for our experiments will be released upon publication.\\n\\nPlease let us know if we can provide any additional information to clarify our work and thank you again for the feedback.\"}", "{\"summary\": \"A summary of mechanistic interpretability is provided. Two approaches are proposed that attempt to interpret a \\u201csimpler\\u201d algorithm that emulates the behavior of a trained neural network in terms of a circuit. 
One approach focuses on modelling the behavior of the full network before finding a subset of related nodes, while the second approach focuses on finding an \\u201cimportant\\u201d sub-network of the full network, whose behavior is then interpreted. Both approaches are showcased in simple toy examples.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The manuscript is clearly written, with a good explanation of the context the work is placed in. The toy examples provided are illustrative. The overarching conclusion is that uniqueness of an explanation should generally not be expected within the context of mechanistic interpretability, and while a similar analysis cannot be conducted on large-scale models, it is likely that the same behavior could be expected. This insight is important in many practical applications where network interpretations are required (post-training) by a practitioner.\", \"weaknesses\": \"The issue of uniqueness of an explanation is addressed in the context of mechanistic interpretability. However, the \\u201cincompatibility\\u201d of different explanations is not substantially addressed. A more formal framework in which incompatibility can be \\u201cmeasured\\u201d would be very interesting, along with analyzing questions on differentiating between equivalence classes of explanations.\", \"questions\": \"If the training set is drawn from a distribution with certain biases, there may be correlations that essentially encourage multiple \\u201cconflicting\\u201d interpretations of a network. Can we resolve some of the issues that arise by putting conditions on the training distributions?\\n\\nWhat would (be expected to) happen if, in a simple toy example, an experiment was repeated with a perfect training error, with or without overfitting? 
Would we see a qualitatively different distribution of explanations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5IBrWCeZtl
Co-Evolution Learning
[ "Zhenwei Long", "Peng Sun", "Zhenglin Cheng", "Deyuan Liu", "Tao Lin" ]
Generative and representation models, whether trained independently or evolved separately, require high-quality, diverse training data, imposing limitations on their advancement. Specifically, self-supervised learning, as a popular paradigm for representation learning, decreases the reliance on labeled data in representation models. However, it still necessitates large datasets, specialized data augmentation techniques, and tailored training strategies. While generative models have shown promise in generating diverse data, ensuring semantic consistency is still a challenge. This paper introduces a novel co-evolution framework (referred to as CORE) designed to address these challenges through the mutual enhancement of generative and representation models. Without incurring additional, unacceptable training overhead compared to independent training, the generative model utilizes semantic information from the representation model to enhance the quality and semantic consistency of generated data. Simultaneously, the representation model gains from the diverse data produced by the generative model, leading to richer and more generalized representations. By iteratively applying this co-evolution framework, both models can be continuously enhanced. Experiments demonstrate the effectiveness of the co-evolution framework across datasets of varying scales and resolutions. For example, implementing our framework in LDM can reduce the FID from $43.40$ to $20.13$ in unconditional generation tasks over the ImageNet-1K dataset. In more challenging scenarios, such as tasks with limited data, this framework significantly outperforms independent training of generative or representation model. Furthermore, employing the framework in a self-consuming loop effectively mitigates model collapse. Our code will be publicly released.
[ "Generative Models", "Representation Learning" ]
Reject
https://openreview.net/pdf?id=5IBrWCeZtl
https://openreview.net/forum?id=5IBrWCeZtl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "YsksFZJlJG", "SBL3BgUgkd", "IMSf63p1fT", "FnU8kwoVLB", "F8NtHER5f3" ], "note_type": [ "official_review", "official_review", "meta_review", "decision", "official_review" ], "note_created": [ 1730524276537, 1730631074185, 1735144472118, 1737524041192, 1730577780479 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10318/Reviewer_pTXW" ], [ "ICLR.cc/2025/Conference/Submission10318/Reviewer_HzNh" ], [ "ICLR.cc/2025/Conference/Submission10318/Area_Chair_kf94" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10318/Reviewer_GKML" ] ], "structured_content_str": [ "{\"summary\": \"This paper tackles a key challenge in advancing generative and representation models: the dependence on high-quality, diverse data for training. To address these limitations, the authors introduce a co-evolution framework that enables generative and representation models to improve each other. Both representation and generation models progressively strengthen their performance by iterating through this mutual enhancement process.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of co-evolution is interesting. It combines the two tasks in a unified framework and tries to help their corresponding model to improve each other in the mutual enhancement process.\\n2. The paper is well-organized, starting with a clear introduction of the current limitations and a detailed breakdown of the design of the proposed framework.\", \"weaknesses\": \"1. The use of a milder data augmentation strategy may have a limited impact on enhancing dataset diversity. Additionally, there is no ablation study to verify the effectiveness of this approach, even in Table 8, leaving its actual contribution to performance unclear.\\n2. An interesting observation in Table 2 is that using a weak generation model leads to a decline in the performance of the trained representation model. 
However, there is no analysis provided on this phenomenon or its potential risks, which would be valuable for understanding the limitations and stability of the proposed framework.\\n3. In the experiments across different datasets in Section 4.3, the generation model implementations vary, yet there is no clear explanations provided for these choices. \\n4. In the co-evolution experiments, it is unclear whether the generation model is trained from scratch or utilizes pre-trained generative capabilities. This lack of clarification makes it difficult to discern the true source of the observed training benefits.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors propose to learn simultaneously a representation model and a generative model following a mutual feedback loop. One path (R2G) uses the embeddings provided by the representation model to guide the learning of the generative model. The other path (G2R) leverages the generated images as augmented data to train the representation model. The combination of both is referred to as co-evolution (CORE). The experiments show that this setting improves the performance in both generative and representation models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The proposed approach is easy to understand, and provides moderate performance improvement.\", \"The paper is well structured and presented.\", \"Some experiments provide some useful insights.\"], \"weaknesses\": \"- In my opinion the novelty is very limited. R2G is equivalent to an autoencoder with a pretrained and fixed encoder, and G2R is equivalent to an autoencoder with reconstruction loss in the latent space, i.e. 
$l_{rec}\\\\left(\\\\hat{z},z\\\\right)$ with $z=f_1\\\\left(x\\\\right)$ and $\\\\hat{z}=f\\\\left(g\\\\left(z\\\\right)\\\\right)$, where the first encoder and the decoder are pretrained and fixed. These settings and their combination (i.e. CORE) have been extensively used in the context of autoencoders and image-to-image translation models (and cross-modal translation models). [A-D] are some early examples that come to mind with similar setting. The main difference is the use of more modern generative models (diffusion), but that is not novel in my view.\\n\\n[A] Unsupervised cross-domain image generation, ICLR 2017\\n[B] MUNIT: Multimodal Unsupervised Image-to-Image Translation, ECCV 2018\\n[C] Perceptual Generative Autoencoders, ICML 2020\\n[D] Mix and match networks: encoder-decoder alignment for zero-pair image translation, CVPR 2018\", \"questions\": \"Please address my concern about the novelty, and justify why the proposed model is significantly different from autoencoders with latent reconstruction loss.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces a co-evolution framework (CORE) that jointly trains generative and representation models to enhance each other iteratively. The framework leverages semantic embeddings from representation models to improve the semantic consistency of generated data and utilizes diverse generated data to enrich representations. The reviewers question the paper in its novelty and experiment scale. The authors do not provide rebuttals to address these problems, leading to a decision to reject this paper.\", \"additional_comments_on_reviewer_discussion\": \"No rebuttal is provided.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a co-learning framework called CORE to jointly learn the representation and generative models. 
Specifically, it has two components, R2G framework which uses pretrained representation vision encoder to project data into latent space z, and learn a generative models by maximizing the log-likelihood conditioned on the z. The second component is G2R, which can sample diverse data points and can be used to learn a better latent representation. Experiments show that co-evolving these two components can facilitate the task performance for representation/generative tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"-- The proposed method empirically found that co-training can boost the performance of generative models training efficiency by 30%\\n\\n-- The proposed Co-evolution of Representation modelsand Generative models (CORE) frame work is novel and interesting\", \"weaknesses\": \"--The paper is a bit hard to follow, for example, it is not clear what the main contribution of this framework after reading the introduction\\n\\n--Experiments only conducted on small-scale dataset, CIFAR10/100 etc, where both SoTA generative models or representation learning methods already mastered and hard to tell if the performance come from parameter tuning or joint learning.\", \"questions\": \"-- How practical it is to implement this framework as the learning is iterative instead of end-to-end?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5I39Zvlb3Y
Collu-Bench: A Benchmark for Predicting LLM Hallucinations in Code
[ "Nan Jiang", "Qi Li", "Lin Tan", "Tianyi Zhang" ]
Despite their success, large language models (LLMs) face the critical challenge of hallucinations, generating plausible but incorrect content. While much research has focused on hallucinations in multiple modalities including images and natural language text, less attention has been given to hallucinations in source code, which leads to incorrect and vulnerable code that causes significant financial loss. To pave the way for research in LLMs' hallucinations in code, we introduce Collu-Bench, a benchmark for predicting code hallucinations of LLMs across code generation (CG) and automated program repair (APR) tasks. Collu-Bench includes 13,234 code hallucination instances collected from five datasets and 11 diverse LLMs, ranging from open-source models to commercial ones. To better understand and predict code hallucinations, Collu-Bench provides detailed features such as the per-step log probabilities of LLMs' output, token types, and the execution feedback of LLMs' generated code for in-depth analysis. In addition, we conduct experiments to predict hallucination on Collu-Bench, using both traditional machine learning techniques and neural networks, which achieves 22.03 - 33.15% accuracy. Our experiments draw insightful findings of code hallucination patterns, reveal the challenge of accurately localizing LLMs' hallucinations, and highlight the need for more sophisticated techniques.
[ "large language model", "hallucination", "code generation", "automated program repair", "benchmark" ]
https://openreview.net/pdf?id=5I39Zvlb3Y
https://openreview.net/forum?id=5I39Zvlb3Y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zlxmEHMyju", "yjHty9jzlC", "Ry2g43PXw5", "NR6T3uNLtd", "GB5Vf7xVe5", "88o2seOCP4" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730704746802, 1730089421903, 1730780523923, 1732679589280, 1731102449827, 1730731074075 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12425/Reviewer_47va" ], [ "ICLR.cc/2025/Conference/Submission12425/Reviewer_GHSr" ], [ "ICLR.cc/2025/Conference/Submission12425/Reviewer_SD48" ], [ "ICLR.cc/2025/Conference/Submission12425/Authors" ], [ "ICLR.cc/2025/Conference/Submission12425/Reviewer_e4aw" ], [ "ICLR.cc/2025/Conference/Submission12425/Reviewer_JqwD" ] ], "structured_content_str": [ "{\"summary\": \"This study introduces Collu-Bench, a benchmark specifically designed to identify and analyze hallucinations in code generated by large language models (LLMs), addressing gaps in current research on code hallucinations. Collu-Bench includes 13,234 instances from five datasets produced by 11 different LLMs, focusing on two key tasks: code generation (CG) and automated program repair (APR). It provides detailed features such as per-step log probabilities, token types, and execution feedback for fine-grained analysis and prediction. Experiments using traditional machine learning and neural network models achieve a maximum accuracy of 33.15%, underscoring the challenge of this task. 
Findings reveal that LLMs show lower confidence in hallucinated outputs and are more prone to hallucinations with specific token types, highlighting the need to improve LLM reliability and accuracy in code generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Collu-Bench includes a comprehensive set of 13,234 instances across diverse LLM models and coding tasks.\\n\\nProvides valuable, fine-grained data such as log probabilities, token types, and execution feedback to support hallucination analysis.\\n\\nExperiments reveal key patterns, like low confidence during hallucinations and higher hallucination rates for specific tokens.\", \"weaknesses\": \"Achieved accuracy limits immediate applicability in practical settings.\\n\\nExcludes state-of-the-art models, potentially reducing relevance to newer LLM architectures.\\n\\nFocuses only on code generation and repair, missing other critical coding applications affected by hallucinations.\\n\\nIdentifies patterns but lacks actionable approaches to reduce hallucinations in practice.\", \"questions\": \"How effective is the automated sampling process in capturing a comprehensive set of canonical solutions, especially for more complex tasks in Defects4J and SWE-bench datasets?\\n\\nWhat are the limitations of the program normalization technique in accurately detecting hallucinations? Are there instances where the normalization process might incorrectly standardize genuinely distinct solutions?\\n\\nIn cases where the generated code subtly deviates from the canonical solutions, how does Collu-Bench ensure that the hallucination token is accurately identified without oversimplifying or introducing false positives?\\n\\nWhat criteria were used to select the five specific datasets, and how might additional datasets impact Collu-Bench\\u2019s robustness and versatility?\\n\\nThis paper includes 11 LLMs of various sizes and types. 
What is the reasoning behind selecting these specific models, and how might the inclusion of more recent or specialized models impact the benchmark\\u2019s findings?\\n\\nWhy do certain token types, like Keywords and Identifiers, appear more susceptible to hallucinations? Could this be influenced by the specific training data or architecture of the LLMs?\\n\\nThe analysis highlights different hallucination patterns across datasets, such as Defects4J showing a high hallucination rate for Operators and Identifiers. What underlying factors in these datasets contribute to these distinct hallucination profiles?\\n\\nHow does the per-token prediction approach compare with a per-example prediction regarding interpretability and practical application? Are there scenarios where one approach is more advantageous?\\n\\nTraditional ML models like Random Forest perform better in specific setups, while neural networks excel in others. What characteristics of hallucination prediction tasks make certain model types more suitable, and could a hybrid model improve results?\\n\\nThe highest accuracy achieved was around 33.15%. What are the main barriers to achieving higher accuracy, and are there known model improvements or alternative feature sets that could be integrated to boost predictive performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper constructs a benchmark containing hallucinated code generated by different LLMs. It also annotates the positions of hallucination tokens, aiming to identify where the model starts exhibiting hallucination behavior. The authors analyze from the perspective of model confidence and the types of hallucinated tokens, discovering corresponding patterns\\u2014for instance, models generally have lower confidence when dealing with hallucinated tokens. 
Additionally, they use some basic machine learning and deep learning models to identify model hallucinations and evaluate these predicters in different settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is easy to follow.\\n\\nFrom the perspective of model confidence, the paper identifies distinct patterns between hallucinated tokens and correctly generated tokens.\", \"weaknesses\": \"1. I believe that \\\"hallucination in code\\\" is fundamentally an ill-defined term, and it is inherently challenging to define. Specifically, in this work, it seems that hallucinated code and incorrect/buggy code are treated as entirely equivalent. Therefore, I think using this term without a rigorous definition is neither precise nor reliable.\\n\\n2. The finding that models exhibit low confidence on hallucinated tokens is very interesting. However, relying solely on token confidence to achieve high identification accuracy is insufficient. Currently, the performance of per-token prediction and per-sample prediction is quite similar, which indicates that the model heavily depends on the confidence feature for identification. However, I believe that this task should be analyzed more from the semantic perspective of the code, which might achieve higher accuracy. For instance, a naive approach, such as having the model review its own generated code, might yield decent identification accuracy.\\n\\n\\n3.This task does not seem fundamentally different from bug localization or program review. The objective in all cases is to identify parts that do not meet the code generation requirements. 
Program review, in particular, is even more challenging as it involves not only identifying but also correcting these parts.\\n\\n4.Even though the authors considered diverse canonical solutions, I believe that using text-based comparisons for data annotation remains imprecise, as there is no guarantee that the range of canonical solutions covers all possible solutions adequately.\", \"questions\": \"1. Before comparing the hallucinated code generated by the model with canonical solutions, do you use methods such as unit tests or program analysis to determine whether the code does not meet the intended generation?\\n\\n2. Did you remove comments when processing model-generated data, as many models, such as GPT-4, may include annotations for the generated statements?\\n\\n3. refer to weakness 3, what are the differences between this benchmark and tasks like program review or bug localization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents Collu-Bench, a benchmark for detecting code hallucinations in outputs from large language models. With over 13,000 instances from 11 models, it helps assess hallucination localization using various data points. It highlights the challenge and need for improved LLM reliability in coding.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper provides a dataset with rich information to analyze hallucination in coding tasks\", \"The authors reveal patterns of code hallucinations across data sources and LLMs\"], \"weaknesses\": [\"The method for ground truth hallucination localization is overly simplistic and may not apply to complex cases, despite the method proposed in section 3.1 (see Questions)\", \"The finding of \\\"LLMs are less confident when hallucinating\\\" is not novel and has been widely used for detecting hallucinations, e.g. 
[1], [2], [3], to name a few. However, I appreciate the authors' experiments studying finer-grained hallucination positions in coding tasks. The authors should emphasize more on their new findings specifically on this domain.\", \"The localization methods only take the probability distribution of top-100 tokens into account, without considering the semantic meanings of the tokens, nor the execution feedbacks.\", \"More hallucination detection baselines should be discussed and compared.\", \"Lack of discussion of the proposed \\\"code hallucination\\\" vs bug localization.\", \"[1] Xiao, Yijun, and William Yang Wang. \\\"On Hallucination and Predictive Uncertainty in Conditional Language Generation.\\\" Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 2021.\", \"[2] Guerreiro, Nuno M., Elena Voita, and Andr\\u00e9 FT Martins. \\\"Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation.\\\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.\", \"[3] Zhang, Tianhang, et al. \\\"Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\"], \"questions\": \"- How do you define hallucination on coding tasks? How does it compare to the bug localization task?\\n\\n- Discussions about hallucination localization in dataset creation:\\n - The per-token hallucination localization method in section 3.3 still looks weak to me after canonical solution sampling. The proposed methods addresses the problem of \\\"identifier variability\\\", but how to tackle the problem of semantically identical problems? 
For example, how do you detect the hallucination location if the ground truth is `return all(v1 > v2 for v1, v2 in zip(tup1, tup2))` and the generation is \\n ```python\\n for v1, v2 in zip(tup1, tup2):\\n if not v1 < v2:\\n return False\\n return True\\n ```\\n An error rate of 14% is reported in section 4.2. How does this affect the usability of the dataset? Is it possible to provide a clean subset of the dataset to train localizers to figure out the impact of wrong annotations?\\n\\n\\n - In section 3.3,\\n > As there could be multiple unique normalized canonical solutions per problem, we calculate the hallucination token indices between the LLM-generated program and every unique canonical solution and eventually take the largest hallucination token index.\\n\\n What is the reason and how accurate is the design of taking the largest index? Moreover, how do you handle multiple hallucinations in the code? Will keeping only one hallucination index cause false negatives training detectors?\\n\\n- Table 1 shows a major source of hallucinations is *keyword*. However, is it related to the process of program normalization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors introduce Collu-Bench, a benchmark designed to evaluate code hallucinations in LLMs. This benchmark includes 13,234 instances of code hallucinations from 11 different LLMs across five datasets, covering both code generation and automated program repair tasks. Collu-Bench\\u2019s innovation lies in its automated process that combines program equivalence and identifier variation to locate hallucinated tokens accurately. The benchmark provides detailed signals, including the log probability at each step, token types, and execution feedback. 
The authors conduct preliminary experiments using traditional machine learning and neural network methods to predict hallucinations, with prediction accuracy ranging from 22.03% to 33.15%. Overall, this benchmark aims to advance the understanding, prediction, and mitigation of hallucinations in automated code generation and program repair tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Collu-Bench differs from previous benchmarks by focusing on finer-grained code hallucinations, providing a new benchmark that includes richer features such as log probabilities and execution feedback. It aims to deepen understanding and predict where hallucinations occur.\\n\\n2. The authors write the paper clearly, emphasizing the importance of the problem. The structure of each section is well-organized, making it easy to understand the motivation, methodology, experimental setup, and conclusions of Collu-Bench.\\n\\n3. The authors execute their experiments effectively, from benchmark construction to analysis and results. They offer detailed descriptions of the findings, complemented by visualizations of experimental results, which enhance the persuasiveness of the conclusions.\", \"weaknesses\": \"1. The authors provide an introduction in Section 3 on how Collu-Bench is constructed and how they generate the Ground Truth. However, I am concerned about the accuracy and quality of the Ground Truth generation method. Despite performing a manual review, the authors achieve only an 86% accuracy rate, which introduces potential bias during evaluation. Moreover, the sample size for manual verification (100 samples) is relatively small compared to the dataset\\u2019s scale. How do the authors address the issue of low Ground Truth quality?\\n\\n2. 
The detection of hallucinations relies on comparing the generated code with a \\\"standard\\\" solution, which may not cover all possible correct solutions, potentially leading to inaccurate hallucination detection. How do the authors address this issue to ensure more accurate hallucination detection?\\n\\n3. In Sections 5.1 and 5.2, the authors merely describe the experimental results without providing detailed analysis. Could they offer more specific insights into why these experimental results occur? For example, why does GPT-4o-mini exhibit the most unique hallucination patterns? Why does the predictor trained on Llama3-8B data generalize well to content generated by most other LLMs? And why do Transformer models perform with relatively low accuracy on Collu-Bench?\", \"questions\": \"1. Could the authors provide specific case studies? Do they examine whether certain types of programming tasks or problem structures are more likely to trigger hallucinations? Providing a more detailed error analysis would be helpful, especially in cases where hallucinations are misidentified or overlooked. Are there specific features or patterns that lead to these errors?\\n\\n3. The authors present a large evaluation dataset, which in practice may make it challenging for researchers with limited computational resources to replicate the results. For instance, the authors themselves do not use all 2,294 entries in SWE-Bench. Do the authors have any specific measures to address this issue?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper successfully introduces Collu-Bench, a challenging benchmark for code hallucination localization. It includes 13,234 hallucination instances generated by 11 diverse LLMs across five datasets, offering a comprehensive evaluation of hallucination localization across multiple models. 
Furthermore, Collu-Bench provides additional information such as per-step log probabilities produced by LLMs, types of generated tokens, and execution feedback, which are useful signals for predicting code hallucinations. Through extensive experiments using traditional machine learning techniques and neural network models as hallucination predictors, the paper provides an in-depth study of hallucination localization using Collu-Bench. Preliminary results indicate that traditional ML methods and neural networks can only achieve an accuracy of up to 33.15%, highlighting the complexity of this task and emphasizing the need for further research to improve the trustworthiness and reliability of LLMs in code-related applications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper clearly defines the problem of code hallucination in LLMs and provides a comprehensive benchmark for research in this area. 2. The inclusion of diverse LLMs and datasets is a significant contribution to the field.\\n3. The paper presents a well-structured approach to collecting and analyzing code hallucination instances. The automated pipeline for handling program equivalency and identifier variability is innovative and adds value to the benchmark.\\n4. The experiments conducted using traditional machine learning techniques and neural networks are thorough and provide valuable insights into the patterns of code hallucination. The findings highlight the challenges and potential areas for future research.\", \"weaknesses\": \"1. Code models generate hallucinatory code, what kind of code can be referred to as hallucinatory code? The definitions of hallucinatory code and hallucinatory tokens in the text are inaccurate. In the abstract section, the authors mention \\\"content that sounds plausible but is actually incorrect\\\", this definition is too vague. 
In the construction of Collu-Bench, the authors consider samples that fail to pass test cases as hallucinatory code and the first token that differs from the standard solution as the hallucinatory token. This is clearly not accurate enough. Failing to pass test cases indicates that the code is incorrect, but it does not necessarily mean it is hallucinatory code.\\n\\n2. While less attention has been given to hallucinations in source code as mentioned in the abstract, there are still several works that address this issue. The paper needs to compare the Collu-Bench dataset with other efforts, such as CodeMirage and CoderEval, to highlight their differences.\\n- CodeMirage: Hallucinations in Code Generated by Large Language Models. https://arxiv.org/abs/2408.08333\\n- CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models. https://arxiv.org/abs/2302.00288\\n\\n3. Hallucinatory code should be deceptive code that appears reasonable to humans but is actually incorrect. How can we ensure that the code sampled from LLMs that fails to pass test cases is also seemingly reasonable to humans and likely to be misused?\\n\\n4. The purpose of the dataset is to reduce the likelihood of LLMs generating hallucinatory code. However, the dataset is primarily used to enhance the model's ability to predict hallucinatory code and hallucinatory tokens. Enhancing the model's predictive capabilities for hallucinatory code and tokens does not necessarily reduce the probability of LLMs generating hallucinatory code.\\n\\n5. Does the normalization process of the code in this paper potentially destroy or lose the semantics of the original code?\\n\\n6. In the process of constructing the dataset, it is taken for granted that code that fails to pass test cases is considered hallucinatory code. In reality, such code is not equivalent to hallucinatory code. 
The dataset constructed in this way contains both \\\"hallucinatory code\\\" and \\\"code with obvious errors that do not cause hallucinations.\\\" If \\\"code with obvious errors that do not cause hallucinations\\\" is not excluded, then the dataset itself has issues, and all subsequent results lack a solid foundation.\\n\\n7. The extent to which LLMs produce hallucinatory code in the dataset construction lacks explanation. Why are some LLMs more prone to generating hallucinatory code, while others are not as likely to produce such code?\\n\\n8. The article mentions and briefly compares CodeHalu and HalluCode, both of which classify and define code hallucinations. However, the concept of hallucinatory code in this paper is vague. The authors should also provide a detailed definition of the concept of hallucinatory code and categorize them.\\n\\n9. The results of various experimental models on the Collu-Bench dataset lack detailed explanations. Why do some methods perform poorly/well, and what are the reasons for their poor/good performance?\\n\\n10. The Collu-Bench dataset currently covers only Java and Python languages. It would be beneficial to construct a dataset that includes more mainstream programming languages, such as C, C++, and Go.\\n\\n11. Consider conducting a more overall human evaluation of the dataset's quality and the accuracy of annotations.\\n\\n12. The dataset relies on LLMs for annotation, but LLMs are not fully reliable, this may lead to incorrect token locations. How to identify and correct errors in the dataset?\\n\\n13. Despite the reduction, the error rate remains relatively high, with 14 out of 100 randomly sampled instances flagged as questionable. How can the error rate be further lowered?\\n\\n14. The paper could benefit from a more detailed discussion of the implications of the findings and how they relate to existing work in the field.\", \"questions\": \"1. 
How can we ensure that the code sampled from LLMs that fails to pass test cases is also seemingly reasonable to humans and likely to be misused?\\n2. Does the normalization process of the code in this paper potentially destroy or lose the semantics of the original code?\\n3. Why are some LLMs more prone to generating hallucinatory code, while others are not as likely to produce such code?\\n4. Why do some methods perform poorly/well, and what are the reasons for their poor/good performance?\\n5. How to identify and correct errors in the dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5GuhYMgaap
Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs
[ "Kewei Cheng", "Jingfeng Yang", "Haoming Jiang", "Zhengyang Wang", "Binxuan Huang", "Ruirui Li", "Shiyang Li", "Zheng Li", "Yifan Gao", "Xian Li", "Bing Yin", "Yizhou Sun" ]
Reasoning encompasses two typical types: deductive reasoning and inductive reasoning. Despite extensive research into the reasoning capabilities of Large Language Models (LLMs), most studies have failed to rigorously differentiate between inductive and deductive reasoning, leading to a blending of the two. This raises an essential question: In LLM reasoning, which poses a greater challenge - deductive or inductive reasoning? While the deductive reasoning capabilities of LLMs (i.e., their capacity to follow instructions in reasoning tasks) have received considerable attention, their abilities in true inductive reasoning remain largely unexplored due to the inseparability of the two types of reasoning in most tasks. To delve into the true inductive reasoning capabilities of LLMs, we propose a novel framework, SolverLearner. This framework enables LLMs to learn the underlying function (i.e., $y = f_w(x)$) that maps input data points $(x)$ to their corresponding output values $(y)$, using only in-context examples. By focusing on inductive reasoning and separating it from LLM-based deductive reasoning, we can isolate and investigate the inductive reasoning of LLMs in its pure form via SolverLearner. Our observations reveal that LLMs demonstrate remarkable inductive reasoning capabilities through SolverLearner, achieving near-perfect performance with ACC of 1 in most cases. Surprisingly, despite their strong inductive reasoning abilities, LLMs tend to relatively lack deductive reasoning capabilities, particularly in tasks involving ``counterfactual'' reasoning.
[ "Reasoning", "LLM", "Inductive", "Deductive" ]
Reject
https://openreview.net/pdf?id=5GuhYMgaap
https://openreview.net/forum?id=5GuhYMgaap
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qIZZLbKJsU", "YQNyY0knSB", "NgdXLg6LiZ", "GsjM7vyQlK", "BM92o5hcdG", "ApOqTTs00x" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1734559916784, 1730677590991, 1730022248903, 1730616781146, 1737523477463, 1730696118367 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1965/Area_Chair_fJ57" ], [ "ICLR.cc/2025/Conference/Submission1965/Reviewer_Nc7T" ], [ "ICLR.cc/2025/Conference/Submission1965/Reviewer_sVwR" ], [ "ICLR.cc/2025/Conference/Submission1965/Reviewer_ee9E" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1965/Reviewer_Rgex" ] ], "structured_content_str": [ "{\"metareview\": \"All reviewers agreed that this paper potentially provides a novel way to understand the interaction with LLMs. The set of benchmarks is varied, and the paper is rather well-written.\\n\\nHowever, the novelty should be made clearer, and the work should provide a better placement within the existing literature on reasoning, in particular with respect to code generation or in-context learning. Moreover, a clearer and more formal definition of what is induction and what is deduction in this context is needed. Real-world scenarios or at least the potential to aim for those should be discussed better.\", \"additional_comments_on_reviewer_discussion\": \"Since there was no rebuttal provided by the authors, the listed weaknesses of the paper could not be resolved in the discussion.\"}", "{\"summary\": \"The paper starts from the observation that there are two forms of reasonung in LLMs: inductive and deductive. 
The authors empirically study the performance of the two approaches, and conclude that LLMs do better with induction.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors present several contributions to a major research domain in AI:\\nThey introduce a notion of several forms of reasoning going on in the system,\\nThey implement a system to validate their claims.\\nThey experimentally obtain unexpected results.\\n\\nThe paper is also rather easy to follow.\", \"weaknesses\": \"I have two major difficulties with the paper:\\n\\nAs the authors themselves observed, the two forms of reasoning are very much intertwined. Although Fig 1 does a nice job at explaining the concepts, as I read I felt the need for a more formal definition of what is induction and what is deduction. This is especially true when I looked at the discussion, where I felt I could not understand statements such as \\n\\\"By completely disentangling the inductive reasoning of LLMs, our proposed SolverLearner shows the remarkable inductive reasoning capabilities inherent in LLMs.\\\"\\n\\nI also felt the notions of induction and deduction may take somewhat different meanings for different researchers.\\n\\nSecond, I would have hoped for a deeper insight into these results. You mention the remarkable inductive reasoning of LLMs, but it would be nice (at least for me) to understand how it appears. Also, why does deduction perform worse?\", \"function_execution\": \"If you do it outside the LLM, is it still an LLM?\\n\\n8-IO w/ Mapping Function (MF): is this deductive or a mix?\\n\\nThe results show a noticeable improvement between ChatGPT 3.5 and 4. Any idea why?\\n\\nThere are a few typos and in a few cases bad English.\", \"questions\": \"Other problems/questions:\\n\\nI understand why you use ChatGPT, but it does make your work non-reproducible. 
It would be helpful to complement the results with an open-source system (it would also help in making your conclusions more general).\", \"wu23_paper\": \"Why only datasets from this work? It seems highly related, so why is it not discussed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper evaluates the inductive and deductive reasoning capabilities of large language models using a framework named SolverLearner. Through tasks including arithmetic, syntactic reasoning, spatial reasoning, and cipher decryption, the authors claim that LLMs perform well in inductive reasoning but struggle with deductive reasoning tasks. The topic of the reasoning ability of LLMs is interesting and important. The overall presentation of the paper is clear and fluent.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"(1) The presentation is clear, fluent, and easy to follow. The task formulation is clear, and the introduction of the proposed framework is transparent.\\n\\n(2) The topic of the reasoning ability of LLMs is crucial and in need of exploration.\\n\\n(3) The experiments are introduced in detail, with all settings and prompts attached in the appendix.\", \"weaknesses\": \"(1) The task setup does not adequately reflect real-world scenarios. The tasks used in the evaluation, particularly the arithmetic tasks across different number bases and synthetic syntactic tasks, are highly artificial and contrived. These tasks do not reflect realistic reasoning challenges that LLMs would face in natural language processing or human cognition. For example, arithmetic problems in base-9 or base-11 are not commonly encountered in real-world settings. The syntactic reasoning tasks are also simplistic, relying on predefined sentence structures with fixed subject-verb-object patterns. 
More realistic scenarios, such as understanding context-dependent syntactic reordering or handling ambiguous language, would make the evaluation more robust and relevant to practical applications.\\n\\n(2) The proposed framework has limited generalizability. The proposed SolverLearner, while effective for some inductive tasks, does not generalize well to broader inductive reasoning challenges. The tasks in the study are highly structured and constrained (e.g., learning the mapping function in base-specific arithmetic), where a unique solution exists for the inductive task. In more complex scenarios, such as reasoning about abstract concepts, learning open-ended rules, or inducing general principles from noisy data, the SolverLearner framework may not be effective. The paper does not discuss how this method could scale to such more complex inductive challenges, where the learning task is not well-defined and may involve multiple plausible solutions.\\n\\n(3) Comparison with other reasoning frameworks is insufficient. The paper fails to adequately compare its results with alternative approaches to reasoning in LLMs, such as chain-of-thought prompting, least-to-most prompting, or retrieval-augmented generation. While SolverLearner is presented as a novel method for isolating inductive reasoning, the lack of comparison with existing techniques leaves its relative merits unclear. For example, chain-of-thought prompting has been shown to improve both inductive and deductive reasoning in various tasks by breaking complex problems into smaller reasoning steps. Without a direct comparison, it is difficult to assess whether SolverLearner offers any significant advantage over these established methods. Including such comparisons would have strengthened the evaluation.\\n\\n(4) The scope of deductive evaluation is too narrow. The deductive reasoning tasks primarily focus on counterfactual arithmetic (e.g., base-9 vs. base-10 arithmetic), which is a very specific case. 
Deductive reasoning encompasses more than just counterfactual logic\\u2014it includes formal logic, rule-based reasoning, and mathematical proofs. The paper does not evaluate these broader aspects of deductive reasoning, such as tasks that involve symbolic logic, proof generation, or formal theorem proving. This limited scope weakens the claim that LLMs perform poorly in deductive reasoning overall. For example, the study might have included tasks like syllogisms or multi-step logical deductions, which would provide a broader view of LLMs' deductive reasoning capabilities.\", \"questions\": \"The paper claims that SolverLearner isolates inductive reasoning, but this separation is not convincingly demonstrated. For example, in the arithmetic task of base-8 addition, the process of identifying the base from examples is considered inductive reasoning. I wonder if the authors could provide more convincing evidence to show that the model is truly performing inductive reasoning instead of simply pattern matching based on prior exposure to similar tasks?\\n\\nNote that using Python interpreters to prevent LLM involvement in the \\\"deductive\\\" step (function execution) does not fully eliminate the possibility that LLMs leverage both types of reasoning in the previous \\\"inductive\\\" step. The distinction remains unclear because the task structure could involve deductive elements when identifying input-output mappings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The proposed approach aims to disentangle deductive from inductive capabilities of an LLM. The main contribution is a series of tasks where each task has both an inductive as well as a corresponding deductive component. 
The results show that LLMs perform more poorly in deductive reasoning as compared to inductive reasoning on the tasks designed to test both.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strengths\", \"Disentangling the inductive and deductive capabilities of LLMs seems like an interesting problem\", \"The types of benchmarks used are varied and several of the state-of-the-art LLMs have been considered in the evaluation\"], \"weaknesses\": [\"Weakness\", \"The paper seems to suggest that SolverLearner is a novel approach, but it is less clear why this is the case. As far as I could understand, SolverLearner just utilizes an external code interpreter to apply the functions learned by the LLM inductively. I was not clear on the complexity involved in doing this, since the approach itself is not described in detail. Further, are there other ways of decoupling the two, since there was not a lot of context on why this is the right way for disentanglement.\", \"The tasks themselves also seem to be from prior work (Wu et al. 2023) apart from the cipher task. Once again, I was not sure if the contribution of the tasks was significantly different from prior work.\", \"Regarding the foundational aspect as such, based on the definition of deductive/inductive inference, since the LLMs are being used a bit like black-boxes, I was not sure about the leap from observing the experimental results to concluding the \\u201ctype\\u201d of inference the LLM is truly performing internally. For example, 
memorization is one aspect that could be affecting the way an LLM is solving a particular task.\", \"In terms of the significance of the study, is the fact that deduction is harder than induction significant? A good application use-case to motivate this study was missing.\"], \"questions\": [\"Some additional comments about the novelty of the proposed evaluation and its significance in LLMs would be useful.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper studies LLM capabilities in inductive and deductive reasoning, and compares the performance gap between the two poles of reasoning. They consider this by framing reasoning as a function-definition task (where the function connects input and output), with\\n- deductive: the model is provided with the function (direct input-output mappings)\\n- inductive: the model is given example (x, y) pairs but not the function \\n\\nWith the framework defined, they test the reasoning processes of LLMs across 4 primary subtasks: arithmetic, basic syntax reasoning (syntactical recognition and identification), spatial reasoning, and a novel cipher decryption task of their own design. Their findings suggest that LLMs seem to be stronger inductive reasoners than deductive ones. In particular, tasks that involve counterfactual reasoning are particularly challenging even with strong inductive performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"I think the main strength of this task is that it provides a new way of looking at various ways we interact with LLMs. Using a framework of inductive and deductive reasoning, we can perhaps consider in-context learning as a deductive reasoning task and code generation as an inductive reasoning task. 
I think it would have been more salient to frame the paper this way, and therefore make it more relevant to many other communities, particularly LLM evaluation communities. The strengths are described in more detail below.\", \"S1. This paper discusses the distinction between inductive and deductive reasoning, and how we may systematically investigate this using a novel framework.\", \"S2. In providing a novel framework, they provide a novel task of their own design, a cipher decryption task. This is a particular strength, because many available evaluative frameworks and benchmarks could have already been used in pretraining of many closed LLMs. With the introduction of a novel task, they can test robustly.\", \"S3. This potentially adds a novel way of looking at code generation and reasoning together.\", \"S4. I can see how our use of in-context learning could be considered from the perspective of inductive and deductive reasoning with this framework.\", \"S5. The framework effectively describes a spectrum between deductive and inductive reasoning. Inductive and deductive reasoning are not always distinctly delineated as I had previously conceptualized, so it was interesting for me to consider.\", \"S6. The paper was well written with mostly clear descriptions.\", \"S7. This framework and the subtasks were thoroughly experimented with on many current SOTA LLMs\"], \"weaknesses\": [\"As pointed out in the strengths, I think this paper has the potential to make us consider looking at how we interact with LLMs in a novel way. However, I think the formulation of the question and presentation of the main thesis could be significantly improved. The paper does not adequately situate itself with existing literature on reasoning, or discuss the relations of this work with respect to code generation or in-context learning.\", \"W1. The paper and its findings were hard to connect to existing work. 
I think it would have made the paper stronger to consider how other work in the reasoning area compares with this approach. There has been numerous work on deductive, inductive, abductive, counterfactual, etc. reasoning. I think there was very little discussion of the prior work, and therefore, this work was poorly situated.\", \"W2. The framework relies on a particular case of generation, which is code generation. I think the performance could very much differ in the case of natural language generation and inductive reasoning. I think it's insufficient to generalize the findings of this paper to the broad deductive/inductive reasoning gap of LLMs.\", \"W3. \\\"Current methods that investigate deductive and inductive reasoning often rely on disparate datasets\\\" may not be true: LogiGLUE (https://arxiv.org/pdf/2310.00836), for example, considers both categories in their datasets.\", \"W4. I believe there was prior work on inductive and deductive reasoning, and some of it does discuss the gap between them. On the claim of novelty, I believe that the question itself is not novel enough. The framework may be novel.\", \"W5. This work does not consider finetuned models, but it would have been interesting to consider them, particularly in discussion with deductive/inductive reasoning and seen examples. The paper does mention some probable explanations for some performance gaps on examples seen/unseen during pretraining.\", \"W6. I think there could have been discussion on how this relates to code generation performance. There are many existing benchmarks for code generation (e.g. BigCodeBench, HumanEval), and because this framework relies on code generation for an external executor, it should be discussed what these evaluative benchmarks are testing with respect to deductive/inductive reasoning.\", \"W7. The writing could be improved in some parts of the paper. 
I found the deductive part to be a bit lacking in discussion.\", \"I believe this work has a lot of potential and it was very interesting to read about this framework! I hope to see this work out, but I wish it were more thoroughly considered and better presented/situated in connection with existing work in the field.\"], \"questions\": [\"Q1. Have inductive and deductive reasoning not been studied before in previous literature? I believe there is existing work, or at least work on either of the categories. (for instance, for deductive reasoning, see: https://aclanthology.org/2023.findings-acl.67.pdf, https://openreview.net/forum?id=KFjCFxiGk4; for inductive reasoning, see: https://arxiv.org/abs/2309.05660). Please review the existing literature and include it in your related work. Perhaps the distinction between the two has not been made explicit, which I believe is a fair contribution, but please acknowledge existing work.\", \"Q2. How do you expect this to connect to benchmarking and evaluations of LLMs? How would this improve the robustness of LLMs?\", \"Q3. I think this work could benefit by considering the tension between memorization (as briefly discussed in the paper about models performing better on the examples seen during the pretraining phase) vs. in-context learning. What would be the connection of inductive/deductive reasoning and in-context examples in this framework?\", \"Q4. Why is it important to distinguish deductive and inductive reasoning? (I believe it *is* important, but I wish for the authors to consider this question. In my opinion, it could be useful particularly in the application of LLMs and improving performance of various symbolic-reasoning-involved generation such as code generation, scientific LLMs, or verification/formal language modeling with LLMs. Perhaps if the work was situated better and considered within the context of generation problems, the motivation behind this distinction would have been better argued in the paper.)\", \"Q5. 
Do you intend to release the datasets and prompts used for the tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5GgjiRzYp3
Intent3D: 3D Object Detection in RGB-D Scans Based on Human Intention
[ "Weitai Kang", "Mengxue Qu", "Jyoti Kini", "Yunchao Wei", "Mubarak Shah", "Yan Yan" ]
In real-life scenarios, humans seek out objects in the 3D world to fulfill their daily needs or intentions. This inspires us to introduce 3D intention grounding, a new task in 3D object detection employing RGB-D, based on human intention, such as "I want something to support my back." Closely related, 3D visual grounding focuses on understanding human reference. To achieve detection based on human intention, it relies on humans to observe the scene, reason out the target that aligns with their intention ("pillow" in this case), and finally provide a reference to the AI system, such as "A pillow on the couch". Instead, 3D intention grounding challenges AI agents to automatically observe, reason and detect the desired target solely based on human intention. To tackle this challenge, we introduce the new Intent3D dataset, consisting of 44,990 intention texts associated with 209 fine-grained classes from 1,042 scenes of the ScanNet dataset. We also establish several baselines based on different language-based 3D object detection models on our benchmark. Finally, we propose IntentNet, our unique approach, designed to tackle this intention-based detection problem. It focuses on three key aspects: intention understanding, reasoning to identify object candidates, and cascaded adaptive learning that leverages the intrinsic priority logic of different losses for multiple objective optimization.
[ "3D Visual Grounding", "3D Multimodal Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=5GgjiRzYp3
https://openreview.net/forum?id=5GgjiRzYp3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yC5zIb0AAQ", "rzwASjHVUB", "rum0yEQVbs", "qkDPwGsVL9", "oRfzdyPC2Q", "nifPY1nUmb", "lWlbaQx440", "lE0vDllw6I", "jf6NvHzBfT", "imzkm59Cj4", "hgZzkw3Lcr", "g0jTEHGAdq", "cncZGcX58W", "avUA49V4t6", "Xy3t9opz7D", "PNomJo9G1E", "PEcSqiZUfW", "N3G2xjziTy", "Ma7pZKgSgV", "LCUKzjddl1", "IRLpQl0N4T", "I7l5AktKu3", "EHkaFDZDWR", "9PUgkllFUq", "7x1tesVvEK", "6WDH2N6cNW", "5bHfaW26Pn", "431yNiA9WW", "1XkOOPtT6H" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review" ], "note_created": [ 1732172675565, 1732171365678, 1737523576660, 1732543181082, 1732258975164, 1732679026119, 1733157526060, 1732172371038, 1732543159316, 1732679112532, 1732259009538, 1732579066321, 1732258987748, 1730698506782, 1732821299393, 1732543136708, 1732169626107, 1733100680867, 1732585064814, 1733157163456, 1732482230728, 1732259000277, 1729455641323, 1732171845264, 1733100949249, 1732172961838, 1734677285852, 1730371490788, 1730180543561 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Reviewer_qHNc" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Reviewer_gGZj" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Reviewer_qHNc" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Authors" ], [ "ICLR.cc/2025/Conference/Submission3450/Area_Chair_iohH" ], [ "ICLR.cc/2025/Conference/Submission3450/Reviewer_tzNR" ], [ "ICLR.cc/2025/Conference/Submission3450/Reviewer_kRaB" ] ], "structured_content_str": [ "{\"comment\": \"## **Response 4.1: Addressing Weakness 1 -- On the Necessity of Intent3D Dataset and IntentNet model dedicated to 3D Intention Grounding**\\n\\nThank you for this insightful question. While a modularized approach may seem flexible, it is not well-suited for addressing the fine-grained, multimodal reasoning required in 3D intention grounding. The modularized pipeline, which separates detection and intention reasoning, introduces two critical issues:\\n\\n1. Error Accumulation: The pipeline accumulates errors from both the single-modal 3D detector and the LLM's reasoning.\\n2. 
Hallucination from LLMs: Large language models suffer from the hallucination problem, necessitating rigorous comparisons to evaluate their actual performance.\\n\\nAs demonstrated in Line475, we have already conducted experiments with a modularized approach using GPT-4 in the Appendix (A.6, page 17). In this setup, a 3D detector identifies all objects in the scene, and GPT-4 determines which objects satisfy the intention. As shown in the table below, the performance of this modularized method is inferior to our IntentNet, underscoring its limitations.\\n\\n| Model | [email protected] | [email protected] | [email protected] | [email protected] |\\n|--------------------|---------------|--------------|---------|--------|\\n| GPT-4 + Proposal | 41.40 | 28.40 | 15.10 | 7.76 |\\n| IntentNet | 58.34 | 40.83 | 41.90 | 25.36 |\", \"this_actually_resonates_with_the_trend_in_visual_grounding_field\": \"- In 2D, approaches have evolved from two-stage methods like MAttNet[1] to one-stage methods like SegVG[2], which emphasize the necessity of multimodal fusion instead of modularization.\\n- Similarly, in 3D, methods have progressed from ScanRefer[3] to EDA[4], showing that advanced approaches increasingly prioritize integrated multimodal reasoning over modularized pipelines.\\n\\nA dedicated dataset for 3D intention grounding is essential, as it provides a focused benchmark to rigorously drive progress in this multimodal reasoning problem. 
"And our IntentNet provides a strong baseline whose novel contributions in verb reasoning and verb-object reasoning can offer inspiration to our community on tackling this intention grounding problem.\\n\\n[1] MAttNet: Modular Attention Network for Referring Expression Comprehension, CVPR 2018\\n\\n[2] SegVG: Transferring Object Bounding Box to Segmentation for Visual Grounding, ECCV 2024\\n\\n[3] ScanRefer: 3D Object Localization in RGB-D Scans using Natural Language, ECCV 2020\\n\\n[4] EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding, CVPR 2023\\n\\n## **Response 4.2: Addressing Weakness 2 -- On the Determination of Six Intentions per Object and the Completeness and Effectiveness of the Dataset**\\n\\nIn our implementation, we find that six sentences on average are enough to cover diverse intentions toward each object, while preventing repetition. Also, we reference the scale of ScanRefer and Nr3D, two widely used datasets in the field of visual grounding. Based on this scale, an average of 6 intentions per object results in a dataset of nearly 45K.\\n\\nAs detailed in Section 3.3, our dataset exhibits a rich diversity in sentence structure and verb-noun combinations, demonstrating its completeness:\\n\\n- Verbs: A total of 1,568 distinct verbs are included.\\n- Nouns: A total of 2,894 distinct nouns are used.\\n- Object category: A total of 209 fine-grained classes.\\n- Diversity per category: On average, each object category is associated with 58 unique verbs and 77 unique nouns.\\n\\nThis extensive coverage ensures the dataset covers a wide range of human intentions across different scenarios.\\n\\nAdditionally, the performance in Tables 1 and 2 demonstrates that our model achieves strong results on both validation and test sets, confirming the effectiveness of the generated intentions, which are sufficient for training.\"}", "{\"comment\": \"## **Response 2.1: Addressing Weakness 1 and Question 1 -- Advantages Compared to LLM-Based Multimodal 
Methods**\\n\\n[Advantages]\\nLLM-based multimodal methods, such as Chat3D-v2 or the GPT-4 two-stage approach (Line475), primarily operate at the object level. This dependency on off-the-shelf detectors for object proposals introduces detector errors and also limits their fine-grained multimodal fusion. The lack of multimodal fusion leads to their poor performance. Similar shortcomings have also been observed in the 2D domain. For example, previous works including POPE[1] reveal that multimodal LLMs often hallucinate on the basic object-existence problem due to the lack of fusion. This challenge becomes more pronounced in fine-grained multimodal tasks like our 3D intention grounding, where comprehensive feature fusion and reasoning are critical. \\nIn contrast, IntentNet addresses this gap through its explicit intention modeling and fine-grained feature integration. Specifically, IntentNet employs point-word-level fusion and enables the model to reason over verb-object relationships explicitly.\\n\\n[Experimental Evidence]\\nDue to resource constraints during the rebuttal period, we chose to conduct a comparison with LL3DA. The results, summarized in the table below, show that IntentNet significantly outperforms LL3DA. Although LL3DA uses point-level features, its Interactor3D module relies on an off-the-shelf detector during inference for bounding box proposals. Errors from this detector propagate to the final predictions, limiting its effectiveness.\\n\\n| Model | val [email protected] | val [email protected] | test [email protected] | test [email protected] |\\n|------------|-------------------|------------------|--------------------|-------------------|\\n| LL3DA | 5.74 | 6.13 | 4.98 | 5.43 |\\n| IntentNet | 58.34 | 40.83 | 58.92 | 42.28 |\\n\\n[Clarification on ReGround3D]\\nThank you for highlighting ReGround3D. 
As it was published within four months of our ICLR submission, it qualifies as a contemporaneous work under the ICLR 2025 policy, and we are not required to compare with it in our submission. Nevertheless, we will include its citation in the camera-ready version.\\n\\n[1] Evaluating Object Hallucination in Large Vision-Language Models, EMNLP 2023\\n\\n## **Response 2.2: Addressing Question 2 -- Advantages Compared to Large Language Models**\\n\\nWhile LLMs exhibit strong reasoning capabilities for pure language tasks, 3D intention grounding is a multimodal, fine-grained problem that requires joint reasoning across both vision and language. Relying solely on a single modality, language, for reasoning is insufficient.\\nFor example, consider the intention: *\\\"I want to sit down and rest.\\\"* In a *\\\"living room\\\"*, the target object might be a *\\\"sofa\\\"*, while in a *\\\"bedroom\\\"*, it could be a *\\\"chair\\\"* or *\\\"bed\\\"*. Without fusing multimodal information, such as scene context, language-only LLMs cannot accurately ground intentions in 3D space.\\nIn contrast, our IntentNet explicitly integrates fine-grained point-level features with text features, enabling verb-object pair reasoning to analyze intentions. This joint feature fusion and reasoning framework allows IntentNet to outperform LLMs in such multimodal, fine-grained scenarios.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Only One Day Remaining, Please Take a Look at our Rebuttal!\", \"comment\": \"Dear Reviewer,\\n\\nThank you for dedicating your time to reviewing our paper. As the discussion period deadline is approaching, we kindly invite any further comments or concerns you might have. Your feedback has been immensely valuable to us in refining the paper.\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"We appreciate all your suggestions that help us improve our paper. 
As the deadline for discussion is approaching, we would be glad to address any remaining questions or concerns. Please let us know if there are any further points you'd like to discuss!\"}", "{\"title\": \"[Urgent] ICLR Extends Rebuttal Period \\u2013 We Are Waiting For Your Response!\", \"comment\": \"Dear Reviewer,\\n\\nICLR has extended the rebuttal period to encourage your response. We sincerely appreciate the time and effort you have already invested, and we have actively addressed all your concerns in detail.\\n\\nSince **you have expressed the willingness to raise the score** and we have already replied in full, are there any remaining questions you would like to ask? Notably, **Reviewer kRaB** acknowledged that **most of their issues have been addressed** and decided to **raise their score to acceptance**. Similarly, **Reviewer qHNc** has expressed that **their concerns have been resolved** and decided to **maintain their acceptance rating**.\\n\\nYour response is critical for determining the outcome of our paper, and we value your input greatly!\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"[ICLR 2025] Last Day in ICLR Extended Discussion Period, Please Adjust Your Rating!\", \"comment\": \"Dear Reviewer,\\n\\nToday is the final day of the ICLR rebuttal period. We noticed that your confidence score is relatively low, and we sincerely hope you might consider aligning with **Reviewer qHNc**, who has higher confidence and set their score to 6.\\n\\n**Please adjust your rating**, since the authors have fully responded to your concerns and you have no more follow-up questions.\\n\\nFor your convenience, we have summarized our contributions and rebuttal in the comment: **Summary of Our Contributions and Rebuttal Responses**. 
\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"## **Response 3.1: Addressing Weakness 1 -- Additional Comparisons and Typo Correction**\\n\\nTo provide more comparisons with current LLM-based methods, we compare our IntentNet with LL3DA. Furthermore, we also compare with GPT-4 + Detector Proposal as another two-stage baseline. Specifically, we use a 3D object detector to propose objects within the scene. Given the object proposals and the intention, we prompt GPT-4 to infer the target object. Lastly, we compare with EDA + Detector Proposal, where the object proposals are appended to the intention sentence and input to the baseline, EDA. These results, summarized in the tables below, consistently demonstrate the superiority of our IntentNet.\\n\\n| Model | val [email protected] | val [email protected] | test [email protected] | test [email protected] |\\n|------------|-------------------|------------------|--------------------|-------------------|\\n| LL3DA | 5.74 | 6.13 | 4.98 | 5.43 |\\n| IntentNet | 58.34 | 40.83 | 58.92 | 42.28 |\\n\\n| Model | [email protected] | [email protected] | [email protected] | [email protected] |\\n|--------------------|---------------|--------------|---------|--------|\\n| EDA + Proposal | 21.68 | 8.74 | 3.96 | 2.71 |\\n| GPT-4 + Proposal | 41.40 | 28.40 | 15.10 | 7.76 |\\n| IntentNet | 58.34 | 40.83 | 41.90 | 25.36 |\\n\\nAdditionally, we have corrected the typo regarding the detector name (*\\u201cGroupFree\\u201d*) in Tables 1 and 2 (page 8) in the revised paper. We highlight the revision in red color for your reference. Thank you for pointing this out!\\n\\n## **Response 3.2: Addressing Weakness 2 -- Additive Ablation Experiments**\\n\\nThank you for the suggestion to include additive ablation experiments. 
Below, we provide the results of such experiments:\\n\\n| Baseline | Verb | Verb2Obj | MatchBox | Adapt | [email protected] | [email protected] |\\n|----------|------|----------|----------|-------|---------------|--------------|\\n| \\u2713 | | | | | 52.32 | 33.39 |\\n| \\u2713 | \\u2713 | | | | 54.88 | 35.47 |\\n| \\u2713 | \\u2713 | \\u2713 | | | 55.98 | 36.10 |\\n| \\u2713 | \\u2713 | \\u2713 | \\u2713 | | 57.39 | 36.93 |\\n| \\u2713 | \\u2713 | \\u2713 | \\u2713 | \\u2713 | 58.34 | 40.83 |\\n\\nNotably, the inclusion of the Verb module alone already provides a significant boost in performance, highlighting its importance in intention grounding. \\n\\n## **Response 3.3: Addressing Weakness 3 -- Performance on Traditional 3D Visual Grounding**\\n\\nThank you for raising this point. This work primarily focuses on 3D intention grounding, which is a task orthogonal to visual grounding. As such, IntentNet is specifically designed for intention grounding and is not evaluated on traditional visual grounding tasks.\\nMore concretely, IntentNet focuses on modeling the reasoning between verbs and objects inherent to human intentions, rather than the nouns or adjectives typically emphasized in visual grounding.\\nMoreover, we hope that the benchmark contributions and modeling insights presented in this paper can inspire future research on joint training approaches for both intention grounding and visual grounding tasks.\\n\\n## **Response 3.4: Addressing Question 1 -- Open Source**\\n\\nYes, we will open-source our dataset and code. Thank you for your interest!\\n\\n## **Response 3.5: Addressing Question 2 -- Clarifying One-Stage Methods**\\n\\nThank you for pointing this out. 
However, this seems to be a misunderstanding.\\n\\n[Our method] IntentNet is a one-stage method, as bounding box predictions are generated in an end-to-end manner rather than relying on a pre-trained detector for inference.\\nAs stated in Line 309 of the paper, the pre-trained detector is only used to provide additional visual references, and the box matching with text is implemented as an auxiliary loss, which does not participate in the actual inference process.\\n\\n[Other one-stage methods] Moreover, we have already compared IntentNet with existing one-stage methods, such as BUTD and EDA. These models also directly fuse 3D data and text to predict bounding boxes, similarly using a pre-trained detector only to provide visual references.\\n\\n## **Response 3.6: Addressing Question 3 -- Training Hardware and Time Cost**\\n\\nThank you for your question. The training of IntentNet was conducted on 4 NVIDIA A6000 GPUs. The training process takes approximately 24 hours.\"}", "{\"title\": \"Only One Day Remaining, Please Take a Look at our Rebuttal!\", \"comment\": \"Dear Reviewer,\\n\\nThank you for dedicating your time to reviewing our paper. As the discussion period deadline is approaching, we kindly invite any further comments or concerns you might have. Your feedback has been immensely valuable to us in refining the paper.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"[Urgent] ICLR Extends Rebuttal Period \\u2013 We Are Waiting For Your Response!\", \"comment\": \"Dear Reviewer,\\n\\nICLR has extended the rebuttal period to encourage your response. We sincerely appreciate the time and effort you have already invested, and we have actively addressed all your concerns in detail.\\n\\nMay we ask if you have any remaining questions or issues for clarification? Notably, **Reviewer kRaB** acknowledged that **most of their issues have been addressed** and decided to **raise their score to acceptance**. 
Similarly, **Reviewer qHNc** has expressed that **their concerns have been resolved** and decided to **maintain their acceptance rating**.\\n\\nYour response is critical for determining the outcome of our paper, and we value your input greatly!\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"We appreciate all your suggestions that help us improve our paper. As the deadline for discussion is approaching, we would be glad to address any remaining questions or concerns. Please let us know if there are any further points you'd like to discuss!\"}", "{\"comment\": \"Thanks for the response. The authors addressed my issues. I plan to keep my original rating.\"}", "{\"comment\": \"We appreciate all your suggestions that help us improve our paper. As the deadline for discussion is approaching, we would be glad to address any remaining questions or concerns. Please let us know if there are any further points you'd like to discuss!\"}", "{\"summary\": \"This paper introduces a new task named 3D intention grounding, which is 3D object detection from direct human intentions. The paper collects the Intent3D dataset, which includes 1042 scenes from ScanNet with paired human-intention questions and 3D object-detection answers. IntentNet is proposed to tackle the 3D intention grounding task via candidate box matching, verb-object alignment, and cascaded adaptive learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a clear motivation for 3D intention grounding, and it includes clear illustrations and a clear presentation of the dataset collection procedure.\\n\\n2. Sound design of each component of IntentNet, and thorough ablations on each component of the proposed pipeline design.\\n\\n3. 
Extensive experiments and discussions demonstrate the effectiveness of the proposed framework compared to different types of baselines.\", \"weaknesses\": \"Major concern:\\n\\nI am concerned about a possibly unfair baseline comparison in the experiment section. Most baselines are designed to tackle noun-type questions instead of human-intention-type questions. What if we pass the question to a finetuned LLM and let it infer what types of nouns/objects the question is targeting from the possible objects in a scene detected by existing 3D object detectors? The performance of these baselines might be much higher after they are given the object they are expected to detect in a scene.\\n\\nMinor concern:\\n\\nIt would be interesting if the authors could provide some cases where IntentNet fails but other models succeed, particularly if the other models are fed the object/noun directly.\", \"questions\": \"Please see the weakness section.\\n\\nI am willing to increase my score if the authors resolve my major concern with some additional baseline experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Urgent] Please encourage the reviewers to participate in the rebuttal discussion.\", \"comment\": \"Dear Area Chair,\\n\\nAs the extended Discussion Period is nearing its end, I kindly request your assistance in prompting **[** Reviewer gGZj **]** and **[** Reviewer tzNR **]** to respond to my comments. \\n\\nNotably, \\n- **The only two responsive reviewers**, **[** Reviewer kRaB **]** and **[** Reviewer qHNc **]**, **decided to accept** my paper after reviewing my rebuttal. **[** Reviewer qHNc **]** has **the highest confidence score** toward our paper.\\n- **[** Reviewer gGZj **]** had previously indicated **a willingness to raise their score** based on my rebuttal and has **the highest confidence score** among the reviewers who gave a score of 5. 
But they remain unresponsive.\\n- **[** Reviewer tzNR **]** remains unresponsive and has the lowest confidence score.\\n\\nDespite my repeated attempts to engage **[** Reviewer gGZj **]** and **[** Reviewer tzNR **]**, I have not yet received any response. Your help in expediting their feedback would be greatly appreciated.\\n\\nThank you for your time and support.\\n\\nBest regards,\"}", "{\"title\": \"Only One Day Remaining, Please Take a Look at our Rebuttal!\", \"comment\": \"Dear Reviewer,\\n\\nThank you for dedicating your time to reviewing our paper. As the discussion period deadline is approaching, we kindly invite any further comments or concerns you might have. Your feedback has been immensely valuable to us in refining the paper.\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"## **Response 1.1: Addressing Major Concern -- Comparison with LLM + Detector**\\n\\nThank you for raising this thoughtful concern regarding the potential baseline involving LLM + Detector and the fairness of our comparisons.\\n\\n[Baseline] To address this, we want to highlight that, as mentioned in Line 475, we have already conducted the LLM + Detector baseline, as detailed in the Appendix (A.6, page 17). Specifically, we use a detector to propose objects in the scene. Given the proposed objects and the intention, we prompt one of the most powerful LLMs, GPT-4, to infer which types of objects the question targets. 
As shown in Table 5 (page 17) or the table below, this two-stage method performs worse than our approach due to detector errors and hallucinations from GPT-4.\\n\\n| Model | [email protected] | [email protected] | [email protected] | [email protected] |\\n|--------------------|---------------|--------------|---------|--------|\\n| GPT-4 + Proposal | 41.40 | 28.40 | 15.10 | 7.76 |\\n| IntentNet | 58.34 | 40.83 | 41.90 | 25.36 |\\n\\n[Fairness] Furthermore, we would like to clarify that 3D-VisTA and Chat3D-v2 in our comparisons are general-purpose models designed to handle general language, not specifically noun-based language. As such, we believe that our comparisons with these models are fair.\\n\\n\\n## **Response 1.2: Addressing Major Concern -- Comparison with baseline + provided objects**\\n\\nTo address the concern of whether the baselines might improve when given the objects they are expected to detect in a scene, we conducted an additional experiment, indicated as \\u201cEDA + Proposal\\u201d. Here, nouns/objects proposed by a detector are appended to the intention sentence as the input for EDA. The results are as below:\\n\\n| Model | [email protected] | [email protected] | [email protected] | [email protected] |\\n|------------------|---------------|--------------|---------|--------|\\n| EDA + Proposal | 21.68 | 8.74 | 3.96 | 2.71 |\\n| EDA | 43.11 | 18.91 | 14.02 | 5.00 |\\n| IntentNet | 58.34 | 40.83 | 41.90 | 25.36 |\\n\\nAs shown by EDA + Proposal, directly providing the nouns/objects expected to be detected to EDA does not improve its performance; rather, it introduces noise due to detector errors, which interferes with the model's reasoning. In fact, existing baselines (e.g., EDA, BUTD-DETR) already use detectors to obtain the \\\"objects expected to be detected\\\" indirectly.\\n\\n## **Response 1.3: Addressing Minor Concern -- Cases where IntentNet fails but others fed with objects succeed**\\n\\nThank you for this suggestion. 
We have actually already provided some failure cases of our IntentNet in the Appendix (Section A.2, Fig. 6, page 16). In our revised paper, we further include visualizations of failure cases in Figure 8 (page 18). We highlight the title of the section in red color for your reference.\\nSpecifically, we observed that our model sometimes struggles with intentions involving multiple actions. For example, consider the intention:\\n*\\\"I need to cool down on hot days while sitting in the living room.\\\"*\\nIn this case, the primary intention is *\\\"cool down\\\"*, which targets the object *\\\"fan\\\"*. However, the secondary action *\\\"sitting\\\"* acts as a distractor, leading our model to incorrectly detect *\\\"couch\\\"* (blue bounding box).\\nWhen we directly provide the ground-truth object noun to EDA, the task becomes simplified, allowing it to correctly detect the *\\\"fan\\\"* (green bounding box). \\nWe have included additional visualized examples in Figure 9 (page 18) for further reference.\"}", "{\"title\": \"[ICLR 2025] Only Two Days Left in ICLR Extended Discussion Period, Please Take a Look at the Rebuttal!\", \"comment\": \"Dear Reviewer,\\n\\n**We have addressed all your concerns**, including comparison with baselines and failure case analysis. As **you previously indicated a willingness to raise your score**, we kindly ask if there is anything further you want us to clarify to support this adjustment.\\n\\n*(Ignoring our rebuttal not only diminishes the purpose of the discussion period but also undermines the collaborative spirit of our ICLR community. 
Additionally, it discredits the sustained effort we have invested over the past three weeks in addressing all your concerns.)*\\n\\n**PLEASE**, we sincerely hope to hear from you soon and appreciate your engagement in this important step.\"}", "{\"title\": \"Thanks for maintaining the acceptance rating!\", \"comment\": \"We appreciate your recognition that our rebuttal addressed your concerns and your decision to maintain your acceptance rating!\"}", "{\"title\": \"[ICLR 2025] Last Day in ICLR Extended Discussion Period, Please Adjust Your Rating!\", \"comment\": \"Dear Reviewer,\\n\\nToday is the last day of the ICLR rebuttal period. Given that you previously indicated a willingness to raise your score, **please adjust your rating**, since the authors have fully responded to your concerns and you have no more follow-up questions.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Summary of Our Contributions and Rebuttal Responses\", \"comment\": [\"Dear PCs, ACs, and Reviewers,\", \"We have submitted detailed responses during the rebuttal stage.\", \"**We appreciate the recognition from Reviewers qHNc & kRaB, *currently the only two responsive reviewers*, for acknowledging that our rebuttal addressed their concerns and for their decisions to maintain/raise their ratings to acceptance.**\", \"Unfortunately, we have not yet received the other two reviewers' feedback. It is worth noting that **[** Reviewer gGZj **]** **indicated a willingness to raise their score based on the rebuttal**. We also invested significant resources, including renting numerous servers at considerable expense, to conduct the experiments requested by the reviewers. It is disappointing that we have not received the remaining reviewers' responses. 
Your attention to this matter is greatly appreciated.\", \"To facilitate further discussion among PCs, ACs, and reviewers, and to assist in finalizing the decision regarding the acceptance of our paper, we would like to briefly recapitulate our contributions and highlight key responses to the questions raised by **[** Reviewer gGZj (R1), Reviewer tzNR (R2), Reviewer kRaB (R3), and Reviewer qHNc (R4) **]**:\", \"### **Summary:**\", \"**(Contribution)**\", \"In this paper, we propose a new task, 3D Intention Grounding (3D-IG), for detecting objects in 3D real-world environments based on human intention instructions. We greatly appreciate that **[** R1 (Strengths 1), R2 (Strengths 1), and R4 (Strengths 1) **]** recognize our **clear motivation** and its **contribution to our community**.\", \"To systematically study this new problem, we develop the Intent3D dataset to support both the training and benchmarking of 3D-IG. We are thankful that **[** R1 (Strengths 1) **]** acknowledges our dataset collection procedure, **[** R2 (Strengths 2) **]** describes our **dataset as high-quality**, **[** R3 (Strengths 2) **]** emphasizes the **dataset\\u2019s significant contribution**, and **[** R4 (Strengths 2) **]** notes that the **dataset is extensive** and **provides a valuable resource** for this task.\", \"After building the benchmark, we conduct a comprehensive evaluation of the existing methods. We thank **[** R1 (Strengths 2) **]** for recognizing our discussion of **extensive baselines**.\", \"**(Soundness)**\", \"We propose IntentNet, a strong baseline that achieves SOTA on Intent3D. IntentNet introduces three novel modules: Candidate Box Matching, Verb-Object Alignment, and Cascaded Adaptive Learning. We are grateful for the recognition of IntentNet\\u2019s **soundness and effectiveness** from **[** R1 (Strengths 2), R4 (Strengths 3) **]**.\", \"Finally, to validate the effectiveness of each module in IntentNet, we conducted extensive ablation studies. 
We thank **[** R1 (Strengths 2 & 3) **]** for acknowledging our **thorough and extensive ablation study** as well as the **effectiveness** of each module.\", \"**(Presentation)**\", \"We thank **[** R1 (Strengths 1) **]** for recognizing our **clear presentation** and **[** R3 (Strengths 1) **]** for acknowledging our **paper's overall high quality** with **clear writing** and **ease of understanding**.\", \"### **Rebuttal:**\", \"**(New experiments or results)**\", \"Added experiments comparing with LLM + detector [Response 1.1, 3.1, 4.1].\", \"Added experiments comparing with baseline + detector [Response 1.2, 3.1].\", \"Added experiments comparing with LLM-based methods [Response 2.1, 3.1].\", \"Conducted additional ablation studies [Response 3.2].\", \"Included analysis of failure cases [Response 1.3].\", \"Provided detailed demonstrations [Response 4.3].\", \"**(Clarify misunderstandings)**\", \"Clarified our data selection [Response 2.3].\", \"Clarified our data diversity [Response 2.3].\", \"Clarified the comparison with one-stage methods [Response 3.5].\", \"**(Explain questions)**\", \"Explained our advantages over other methods [Responses 2.1, 2.2].\", \"Explained the fairness of comparisons [Responses 1.1, 2.1].\", \"Explained the effectiveness of our dataset [Responses 2.3, 4.2].\", \"Explained potential improvements [Response 2.4].\", \"Explained validation on visual grounding tasks [Response 3.3].\", \"Explained open-source plans and experimental details [Responses 3.4, 3.6].\", \"Explained the design of dataset construction [Response 4.2].\", \"Explained the scaling problem [Response 4.4].\"]}", "{\"comment\": \"We appreciate all your suggestions that help us improve our paper. As the deadline for discussion is approaching, we would be glad to address any remaining questions or concerns. 
Please let us know if there are any further points you'd like to discuss!\"}", "{\"summary\": \"This paper presents a novel framework for 3D object detection that integrates human intention into the detection process. The authors introduce the Intent3D dataset, aiming to enhance the model's understanding of human needs in real-world scenarios. The proposed method, named IntentNet, employs a multi-instance detection approach, where the model is tasked with identifying multiple instances of objects based on free-form textual descriptions of human intentions. The authors evaluate their approach against several baselines, including expert models designed for 3D visual grounding, foundation models for generic 3D understanding tasks, and Large Language Model (LLM)-based models. The evaluation demonstrates the effectiveness of IntentNet in achieving state-of-the-art performance on the Intent3D benchmark.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of the 3D Intention Grounding (3D-IG) task could contribute to the 3D visual grounding community.\\n2. The proposed Intent3D dataset is extensive, comprising 44,990 intention texts linked to 209 fine-grained object classes from 1042 3D scenes. This dataset provides a valuable resource for training and evaluating models in the context of human intention.\\n3. The proposed modules in IntentNet are generally technically sound.\", \"weaknesses\": \"1. Object detection based on human intention is indeed a new task. However, why do we need a dataset and a model dedicated to this task? I think detection based on human intention can be achieved in a modular manner. For example, first, let the 3D detection module detect all types of objects in the scene. Then ask an LLM to decide the subset of the detected objects that can fulfill the human intention. 
I think this could be more flexible compared with training a dedicated detection model based on human intention prompts.\\n2. In L210, it is mentioned that around six intention texts are generated per object. How do you determine this number (six)? Can it guarantee that all possible intentions can be covered and trained well?\\n3. The contents around Eq (3) are hard to understand for me. What is t in Eq (3)? Although the authors claimed that the code would be released to facilitate understanding, it would be better to add a figure to illustrate the connections in the network. \\n4. Figure 4 is too abstract. Even though many connections and modules are included, it is not very informative. I suggest that more detailed diagrams be provided for the key modules. \\n5. Figure 5 shows that the verb alignment is very helpful for the prediction quality. Do you think this is caused by the limited training data? If more data were available for training, would this module still be essential?\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## **Response 2.3: Addressing Weakness 2&3 and Question 3&4 -- Object Selection and Dataset Diversity**\\n\\nWe believe there might be some misunderstanding regarding the *non-trivial objects* and *object diversity* in our dataset.\\n\\n[Non-Trivial Objects]\\nAs stated in Line 187, trivial objects are filtered out on a per-scene basis. This means an object considered trivial in one scene might not be trivial in another. For example, *\\\"chairs\\\"* are commonly used objects. In a *\\\"conference room\\\"* scene, where chairs are abundant, they are deemed trivial and excluded to improve dataset quality and difficulty. However, in a *\\\"bedroom\\\"* scene, chairs are not trivial and are retained in the dataset. 
Therefore, our data does have intentions toward commonly used objects.\\nTo further clarify, we have added visualizations of IntentNet\\u2019s grounding on trivial objects in Appendix A.8 (page 19), with the section title highlighted in red for your reference. These examples demonstrate that our model can accurately ground trivial objects when required.\\n\\n[Object Diversity]\\nRegarding the diversity of objects, *Figure 3 (d) and (e)* do not represent all object categories in our dataset. As indicated in Line 253, the x-axis shows *\\u201cevery tenth data point\\u201d* for clarity. Our dataset actually includes *209 fine-grained classes*, as mentioned in Line 241.\\nFor additional details, we provide a detailed version of *Figure 3 (d) and (e)* and an additional histogram of object categories in Appendix A.9 (page 20), with the section title also highlighted in red for your reference. These supplementary statistics demonstrate the richness and diversity of our dataset across object categories.\\nIt is worth noting that, in contrast, Referit3D, a commonly used 3D visual grounding dataset, includes only 76 object types. \\n\\n[Experimental Validation]\\nAs shown in Tables 1 & 2, we have constructed a validation set and a test set to benchmark 3D intention grounding, on which our IntentNet serves as a strong baseline for the current benchmark. Specifically, IntentNet achieves 58.34% [email protected] on the validation set and 58.92% [email protected] on the test set, which provides sufficient experimental validation that our model has learned effectively. 
\\nIn Appendix A.10 (page 21), with the section title highlighted in red for your reference, we further provide the accuracy and AP curves of IntentNet's training process, which converge and achieve strong performance, indicating that our model is well trained on the training set.\\n\\n## **Response 2.4: Addressing Question 3 -- Consideration of Fine-Grained Descriptions for Trivial Objects**\\n\\nThank you for this intriguing suggestion! Introducing fine-grained intentions for those trivial objects could indeed be a viable approach, if the intentions are sufficiently specific to make them unambiguous.\\nHere we provide a feasible example: Consider a large living room containing numerous tables. By calculating the number of points of each table, we could select the largest table. For an intention such as *\\u201cI want to find the most comfortable place to put all the food for sharing with friends\\u201d*, this *\\\"table\\\"* would no longer be considered trivial, as its larger size makes it distinct from other tables and suitable for this intention.\\nWe appreciate this perspective and will incorporate such considerations into the next version of our dataset to further enhance its complexity. However, as highlighted in *Response 2.3*, our current dataset already includes a diverse and extensive range of objects, ensuring comprehensive coverage for this task.\"}", "{\"title\": \"[ICLR 2025] Only Two Days Left in ICLR Extended Discussion Period, Please Take a Look at the Rebuttal!\", \"comment\": \"Dear Reviewer,\\n\\n**We have addressed all your concerns**, including comparison with other methods and clarification of dataset construction. 
**As your confidence score is low**, we kindly encourage you to review our rebuttal carefully and consider making a more deliberate decision based on it.\\n\\n*(Ignoring our rebuttal not only diminishes the purpose of the discussion period but also undermines the collaborative spirit of our ICLR community. Additionally, it discredits the sustained effort we have invested over the past three weeks in addressing all your concerns.)*\\n\\n**PLEASE**, we sincerely hope to hear from you soon and appreciate your engagement in this important step.\"}", "{\"comment\": \"## **Response 4.3: Addressing Weakness 3 & 4 -- Explaining Eq. (3) and Providing a Detailed Figure**\\n\\nThank you for your comment regarding Eq. (3). We appreciate the opportunity to clarify the meaning of $t$ step by step.\\nIn Eq. (3), $T$ denotes all the text tokens; $T_l$ is the subset of $T$ consisting of the verb tokens among the text tokens. Therefore, the $t$ in \\u201c$t \\\\in T$\\u201d denotes a single text token, which may or may not be a verb, while the $t$ in \\u201c$t \\\\in T_l$\\u201d denotes a single verb token.\\nTo provide further clarity, we have included a more detailed figure in the Appendix (A.11, page 22) of the revised paper. We highlight the title of the section in red color for your reference. This figure aims to illustrate the connections in the network and the key modules of our loss functions.\\n\\n## **Response 4.4: Addressing Weakness 5 -- On the Scaling Effect of Verb Alignment**\\n\\nThank you for raising this insightful question. We believe that the effectiveness of Verb Alignment comes from its dense supervision signal rather than from the limited data, and we expect it to bring further improvement if the data can be scaled up in the future.\\n\\nFirstly, in the 3D domain, acquiring large-scale real-world point cloud data is a significant challenge, as it is not as easily accessible as 2D data from the internet. 
In this context, exploring techniques that make the best use of the available labeled data, such as our verb alignment, becomes crucial.\\n\\nSecondly, even if 3D real-world data were to become more abundant in the future, we believe that verb alignment will continue to be effective. This is because verb alignment enhances the utilization of supervision signals from individual sentences. \\n\\nFor traditional detection losses, detecting the target does not necessarily require a comprehensive understanding of the entire intention sentence. For example, in the intention *\\u201cI want to sit comfortably while listening to music,\\u201d* the target is *\\u201cchair.\\u201d* The model may only need to focus on the verb *\\u201csit\\u201d* to infer the target, but losses from verb alignment encourage a more holistic understanding of the sentence, including the understanding of all verb-object links, e.g., *\\u201clistening to music\\u201d*. As a result, the supervision signal becomes denser, improving the model's overall reasoning capabilities.\"}", "{\"metareview\": [\"This paper introduces the 3D Intention Grounding (3D-IG) task, which aims to detect objects in 3D scenes based on human intention instructions. 
To support this task, the authors develop the Intent3D dataset and propose a method called IntentNet, which consists of three modules: candidate box matching, verb-object alignment, and cascaded adaptive learning.\", \"Initial reviewer concerns focused on several aspects of the paper, including:\", \"Unfair baseline comparisons (gGZj)\", \"The motivation behind the 3D-IG task and the Intent3D dataset (gGZj)\", \"The contribution of the verb-object alignment module (qHNc)\", \"Clarifications regarding the advantages over large language models (LLMs) (tzNR)\", \"Object selection and dataset diversity (tzNR)\", \"The training process (kRaB)\", \"Issues with equations and figures (qHNc)\", \"Request for failure case analysis (gGZj)\", \"The need for additional baseline comparisons (tzNR, kRaB)\", \"The request for further ablation studies (kRaB)\", \"Evaluation on traditional visual grounding tasks (tzNR)\", \"The authors addressed these concerns comprehensively in their rebuttal, providing clarifications and additional experimental results where requested. Reviewer qHNc acknowledged that their concerns had been addressed, while the other three reviewers (gGZj, tzNR, and kRaB) did not respond to the rebuttal. After carefully reviewing the authors' responses and the remaining concerns, the AC believes that most issues have been adequately addressed.\", \"Given that Intent3D represents a new and promising task for embodied AI, along with its contribution of a novel dataset and a strong baseline method, the AC recommends accepting this paper.\"], \"additional_comments_on_reviewer_discussion\": \"Initial reviewer concerns focused on several aspects of the paper, including:\\n\\n1. Unfair baseline comparisons (gGZj)\\n2. The motivation behind the 3D-IG task and the Intent3D dataset (gGZj)\\n3. The contribution of the verb-object alignment module (qHNc)\\n4. Clarifications regarding the advantages over large language models (LLMs) (tzNR)\\n5. 
Object selection and dataset diversity (tzNR)\\n6. The training process (kRaB)\\n7. Issues with equations and figures (qHNc)\\n8. Request for failure case analysis (gGZj)\\n9. The need for additional baseline comparisons (tzNR, kRaB)\\n10. The request for further ablation studies (kRaB)\\n11. Evaluation on traditional visual grounding tasks (tzNR)\\n\\nThe authors addressed these concerns comprehensively in their rebuttal, providing clarifications and additional experimental results where requested. Reviewer qHNc acknowledged that their concerns had been addressed, while the other three reviewers (gGZj, tzNR, and kRaB) did not respond to the rebuttal. After carefully reviewing the authors' responses and the remaining concerns (points 1,2,4-6,8-11), the AC believes that most issues have been adequately addressed. \\n\\nConsidering Intent3D represents a new and promising task for embodied AI, along with its contribution of a novel dataset and a strong baseline method (agreed by most reviewers), the AC recommends accepting this paper.\"}", "{\"summary\": \"This paper introduces the task of 3D Intention Grounding (3D-IG), which aims to automate the reasoning and detection of target objects in real-world 3D scenes using human intention cues. To this end, the authors constructed the Intent3D dataset, comprising 44,990 intention texts across 209 fine-grained object categories, and developed several baseline models to evaluate various 3D object detection techniques. 
Finally, the authors proposed a novel method, IntentNet, which optimizes intention understanding and detection tasks through techniques such as verb-object alignment and candidate box matching, achieving state-of-the-art performance on the Intent3D benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1: A new task in 3D object detection employing RGB-D, based on human intention, facilitates smoother and more natural communication between humans and intelligent agents.\", \"2\": \"Current large language models can also directly infer human intentions; what are the advantages of this paper compared to them?\\n\\n3\\uff1aWhen filtering Non-trivial Objects, objects with more than six instances in fine-grained categories are directly removed, which may lead to the exclusion of commonly used objects. Could we consider adding more fine-grained descriptions for these objects instead of outright deletion?\", \"weaknesses\": \"1: There has been a few methods to combine 3D scene understanding with LLM beyond Chat3D v2, such as LL3DA, Grounded 3D-LLM, ReGround3D and so on. The paper does not highlight the advantages compared to them.\", \"3\": \"The limited variety of object category included in the dataset, fails to demonstrate the grounding effect for the missing objects in the dataset.\", \"questions\": \"1\\uff1aThere has been a few methods to combine 3D scene understanding with LLM beyond Chat3D v2, such as LL3DA, Grounded 3D-LLM, ReGround3D and so on. What are the advantages of this paper compared to theirs? More experimental evidence is needed to demonstrate the advantages of the this paper over other methods.\", \"4\": \"Figures 3 (d) and (e) indicate that the dataset lacks sufficient diversity in the types of objects included. 
Experimental validation is necessary to determine whether the variety of object types included in the dataset is sufficient for the model to learn effectively.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a 3D Intention Grounding (3D-IG) task and constructs a novel dataset called Intent3D (sourced from ScanNet data and generated using GPT). Additionally, it proposes a baseline model named IntentNet. I am not an expert in 3D-related fields, so my confidence in this review is not very high.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The overall quality of the paper is high, with clear writing and easy-to-understand presentation.\\n2. The contribution of the dataset is significant, as it is the first to construct a 3D detection task focused on intention understanding.\", \"weaknesses\": \"1. More comparisons with recent works should be provided in Tables 1 and 2. Additionally, there is a minor mistake: the detector names \\u201cGroupFree\\u201d and \\u201cGroup-Free\\u201d in the first two rows of Tables 1 and 2 do not match.\\n2. The article gives a subtractive ablation experiment. I would like to see an additive ablation experiment, such as how the effect of verb alone works.\\n3. The article does not give the performance of the proposed IntentNet in traditional 3D grounding.\", \"questions\": \"1. Will the proposed dataset and code be open-sourced?\\n2. IntentNet seems to be a two-stage approach, where a pre-trained 3D detector first extracts proposals, which are then matched with text. Are there any one-stage methods that directly fuse 3D data and text to generate boxes? If so, the paper does not seem to provide comparisons with such methods.\\n3. 
Regarding the training process, on what hardware was the method trained, and how long did the training take?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
5GauLpaNGC
Task Characteristic and Contrastive Contexts for Improving Generalization in Offline Meta-Reinforcement Learning
[ "Hongcai He", "Anjie Zhu", "Zetao Zheng", "Paul Weng", "Jie Shao" ]
Context-based offline meta-reinforcement learning (meta-RL) methods typically extract contexts summarizing task information from historical trajectories to achieve adaptation to unseen target tasks. Nevertheless, previous methods may lack generalization and suffer from ineffective adaptation. Our key insight to counteract this issue is that they fail to capture both task characteristic and task contrastive information when generating contexts. In this work, we propose a framework called task characteristic and contrastive contexts for offline meta-RL (TCMRL), which consists of a task characteristic extractor and a task contrastive loss. More specifically, the task characteristic extractor aims at identifying transitions within a trajectory, that are characteristic of a task, when generating contexts. Meanwhile, the task contrastive loss favors the learning of task information that distinguishes tasks from one another by considering interrelations among transitions of trajectory subsequences. Contexts that include both task characteristic and task contrastive information provide a comprehensive understanding of the tasks themselves and implicit relationships among tasks. Experiments in meta-environments show the superiority of TCMRL over previous offline meta-RL methods in generating more generalizable contexts, and achieving efficient and effective adaptation to unseen target tasks.
[ "Reinforcement Learning", "Meta-Reinforcement Learning" ]
Reject
https://openreview.net/pdf?id=5GauLpaNGC
https://openreview.net/forum?id=5GauLpaNGC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t6fD17pKBr", "os3O6rPJ8Y", "moscM3afY9", "moJaKgid3F", "hXJbpUcpPR", "S771zqMf6V", "OpfUS6zk7q", "OHZasodUFM", "OE42mx0sAj", "MGZGJ4aDqK", "IMyY5y4MRv", "IAtxxxwXsr", "HnrWmKM9Pf", "DW14888eYg", "3mChIxV2Zd", "0GdKHHhh94" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730717463662, 1732568727032, 1730723138549, 1732638176266, 1734687045303, 1731936832337, 1732680529649, 1731936257323, 1731936136081, 1731934496352, 1731936670489, 1737524143827, 1731935698068, 1732511138162, 1730352499628, 1732671781716 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11750/Reviewer_QaPj" ], [ "ICLR.cc/2025/Conference/Submission11750/Reviewer_QaPj" ], [ "ICLR.cc/2025/Conference/Submission11750/Reviewer_4bUz" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Area_Chair_JFnV" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Authors" ], [ "ICLR.cc/2025/Conference/Submission11750/Reviewer_SQK4" ], [ "ICLR.cc/2025/Conference/Submission11750/Reviewer_4bUz" ] ], "structured_content_str": [ "{\"summary\": \"This work targets the problem of offline meta RL by learning a context of a task information from trajectories so that the learned context encoder can quickly capture characteristics of an unseen test 
task with limited interactions. Specifically, the authors propose to learn such a context encoder by conditioning a reward neural network on a weighted aggregation of transition encodings in a trajectory. The authors also propose to train the context vector by penalising reward prediction when conditioned on a reversed weighted version of the context. This work also leverages contrastive learning to train transition encoding.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The motivation behind learning both task characteristic and task contrastive information for better meta generalisation is reasonable.\\n\\nThe proposed method is evaluated on many meta RL environments and empirical results show improved performance.\", \"weaknesses\": \"The proposed method in this work consists of many components around the context encoder training. However, it is unclear to me what the fundamental technical reason behind these kinds of design is and why these specific designs can achieve the desired behaviour of the context encoder. There are many explanations in the method part in section 4 but they are not well structured in logic and look very lengthy:\\n\\n(1) Line 38: \\u201cas only a few key transitions within the trajectory provide the main task characteristic information\\u2026\\u201d This is to say many other transitions do not distinguish tasks. I have concerns about this statement, as this is probably only correct when the tasks have some property like a hierarchical structure. In general, when the dynamics of a target task has a consistent shift on the entire state space, such a sparsity prior would not be beneficial.\\n\\n(2) It is unclear to me why Eq. (7) and Eq. (8) can lead to learning a context encoder such that the task characteristic extractor q can capture task unique transitions.
The neural network is probably able to capture task conditioned reward with a task-level context without learning relations in terms of tasks transitions. In my opinion, the network does not promote the correct importance score of c_i. It probably only makes c_i and c_i^neg different and that is enough to learn a conditional reward function under Eq. (7) and (8).\\n\\n(3) Are r and r_reverse in Eq. (7) and Eq. (8) the same neural network with same parameters? \\n\\n(4) It seems that Eq. (7) and Eq. (8) only capture the task shift in terms of reward function while the transition dynamics is ignored (no loss function in terms of next state prediction). Can authors please explain the reason?\\n\\nOverall, the proposed method consists of several modified versions of previous loss functions and is also combined with existing contrastive learning technique. The technical novelty is not strong and there is no theoretical analysis on why the proposed objective function can guarantee generalisation in a meta learning setting.\", \"questions\": \"Please see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the authors\", \"comment\": \"I would like to thanks the authors for their detailed responses.\\n\\nRegarding the \\u201cWeakness about fundamental technical reason.\\u201d: I agree with the authors that this work has a clear high-level motivation. This is also one of the strengths I have mentioned in the review. However, this reasonable overall motivation does not fully support the *technical* reason behind each design choice of many loss functions proposed in the technical part, i.e., these high-level goals do not automatically translate to these loss functions. The proposed loss function might improve the model in terms of these goals, but the linkage here is not strong. 
This concern of intuitive modelling is also shared with reviewer SQK4.\\n\\nRegarding W1, I would like to thank the authors for their further explanations. It seems that consistent shift is not that easy and offline meta-RL methods can be limited in this setting, but overall I think this is not a very big problem during this phase. I would suggest the authors improve the wording in this paragraph (Line 238) by pointing out what the potential successful or failure cases are for this design. For W2, the explanations do not solve my concern. It is still unclear to me why these two loss functions can learn correct attention. I can understand that learning with Eq. (7) or (8) alone obviously fails but the response did not answer whether simply making c_i and c_i^neg different is enough for the minimization of these two loss function. \\u201cin Eq. (7), if less important transitions are assigned high attention weights, accurate reward estimations become difficult. Meanwhile, in Eq. (8), if important transitions \\u2026\\u201d are there any theoretical or empirical justifications for these two points in this work? For W4, I would like to thank the authors for results of comparison. It seems that these results do not provide any insights regarding different choices and it might be interesting to investigate the reason behind this.\"}", "{\"summary\": \"The paper proposes a framework called Task Characteristic and Contrastive Contexts for Offline Meta-Reinforcement Learning (TCMRL), which aims to enhance the generalization ability of context-based offline meta-RL methods. TCMRL introduces two key components: a task characteristic extractor and a task contrastive loss, which work together to generate more comprehensive contexts by capturing both characteristic and contrastive task information. 
The task characteristic extractor uses an attention mechanism to emphasize transitions that are crucial for characterizing a task, while the task contrastive loss helps to distinguish different tasks by exploring interrelations among trajectory subsequences. Experiments demonstrate that TCMRL significantly improves adaptation to unseen tasks, outperforming existing offline meta-RL methods on multiple benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. TCMRL brings a fresh perspective by dynamically disentangling characteristic features from the trajectories while also maximizing interrelations among tasks.\\n\\n2. The paper is written clearly, with a logical structure that makes it easy for readers to follow the flow of ideas. Key concepts such as task characteristic and contrastive information are well explained, with visual aids like figures and pseudocode to help illustrate the framework.\\n\\n3. TCMRL improves the generalization capability of context-based offline meta-RL.\", \"weaknesses\": \"1. Some results are reported as normalized scores, e.g. Table 1. However, there is no explanation for how normalization is processed.\\n\\n2. Although context shift is highlighted as one of the primary issues that TCMRL aims to solve, there is no in-depth analysis of how TCMRL reduces context shift compared to other methods, and potential limitations.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Thank you for your response. We appreciate the discussion!\\n\\nThank you for your recognition and appreciation of our response to W1. Our sparsity prior setup is common in other fields, especially in data-driven approaches [1, 2, 3, 4]. 
We will improve our presentation and add relevant potential successful or failure cases in the subsequent version.\\n\\n**Regarding the \\\"Weakness about fundamental technical reason\\\":** \\n\\nTo optimize our task characteristic extractor, we propose loss functions from three perspectives: positive and negative reward estimation, and sparsity in attention weights. These loss functions are grounded in our sparsity prior, which is supported by theoretical foundations (Arjona-Medina et al., 2019; Faccio et al., 2022).\\n\\nAs mentioned in Lines 240-245, Section 4.1.1, the motivation for our L_{TCE}^{spa} is to assign higher attention weights to transitions that represent task characteristics, rather than to general transitions containing redundant information.\\n\\nThe motivation for our L_{TCE}^{pos} is to guide our task characteristic extractor to identify and emphasize only a few transitions that are task characteristics, enabling contexts that comprehensively reflect the task characteristic information of a particular task. This loss function ensures c_i achieves accurate reward estimation for both important and general transitions, forcing the task characteristic extractor to focus on task characteristic transitions.\\n\\nThe motivation for our L_{TCE}^{neg} is to optimize the task characteristic extractor from the opposite perspective. Our primary goal is to ensure that c_{i}^{neg}, which focuses on redundant information from general transitions, fails to achieve accurate reward estimation, thereby guiding the task characteristic extractor to assign low attention weights to such transitions. To achieve this, as described in Lines 277-291, Section 4.1.1, we design a negative target {r'}_{i}^{neg} and induce the reward estimation conditioned on c_{i}^{neg} to approximate this incorrect target.\\n\\n**Regarding W2:**\\n\\nIn Eq. (7) and Eq. 
(8), two distinct reward estimation targets are used: {r'}_{i}^{t}, the positive target sampled from D_i, and {{r'}_{i}^{t}}^{neg}, the negative target. For a given task, {r'}_{i}^{t} is related to the reward function that reflects the task information, while {{r'}_{i}^{t}}^{neg} introduces bias. Eq. (7) encourages the task characteristic extractor to assign high attention weights to task characteristic transitions for achieving accurate reward estimation. Eq. (8) enforces the task characteristic extractor to assign low attention weights to general transitions for approximating the negative target. Due to the differences in task characteristic information represented by these two types of transitions, incorrect assignment of attention weights disrupts the optimization, preventing the proper minimization of both Eq. (7) and Eq. (8). Some prior studies [1, 3] have demonstrated the empirical effectiveness of optimizing in both positive and negative perspectives, providing justification for our method. Moreover, as shown in the ablation results in Figure 6, Figure 7, and Figure 8, L_{TCE}^{pos} plays a primary role, while L_{TCE}^{neg} serves as a constraint to assist the optimization process. Notably, as stated in Lines 299-301, Section 4.1.1, L_{TCE}^{neg} is not used to optimize the context-based reward estimator.\\n\\n**Regarding W4:**\\n\\nWe conduct these experiments to illustrate why the context-based reward estimator is chosen to measure how effectively the context captures task characteristic information. Specifically, neither the individual use of the context-based dynamic or reward estimator nor their combined use yields significant performance improvement. As shown in Figure 9 and Appendix G.3, setting an appropriate negative target is crucial. However, constructing a negative target for each next state is challenging due to its high dimensionality, requiring a multidimensional Gaussian distribution of noise to generate corresponding {{s'}_{i}^{t+1}}^{neg}. 
Our additional results show that dynamic estimations improve performance slightly in the Half-Cheetah-Vel environment when combined with reward estimations, while TCMRL with dynamic estimations alone shows the lowest performance in most environments. In contrast, constructing {{r'}_{i}^{t}}^{neg} is simpler due to its lower dimensionality and does not require such complexity. There are other methods for constructing negative targets and measuring context effectiveness, which we plan to explore in future work.\\n\\n[1] CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation. 2022.\\n\\n[2] Weakly Supervised Action Localization by Sparse Temporal Pooling Network. 2018.\\n\\n[3] Self-Erasing Network for Integral Object Attention. 2018.\\n\\n[4] Salient Object Detection via Integrity Learning. 2023.\"}", "{\"metareview\": \"**summary**\\n\\nThe paper introduces TCMRL, a framework designed to enhance generalization and adaptability in context-based offline meta-RL methods. TCMRL incorporates a task characteristic extractor, which uses attention mechanisms to identify key transitions within tasks, and a task contrastive loss, which employs contrastive learning to differentiate tasks by analyzing interrelations among trajectory subsequences. Together, these components create comprehensive context representations that capture both task-specific and distinguishing features, enabling rapid adaptation to unseen tasks. The authors also condition a reward network on weighted transition encodings and penalize reversed context weights to further refine task understanding. 
Experiments across benchmark datasets demonstrate that TCMRL outperforms existing approaches.\\n\\n**strengths**\\n\\nThe paper is well-written with a clear and logical structure, making it easy to follow, and it demonstrates consistent improvements over baseline methods across a diverse set of experiments.\\n\\n\\n**weaknesses**\\n\\n* The paper lacks a clear technical rationale for its design choices, making it unclear why these specific components achieve the desired behavior of the context encoder.\\n* There is no theoretical analysis or strong empirical justification to demonstrate that the proposed objective function guarantees generalization.\\n* The approach combines modified versions of existing loss functions and established techniques, limiting the novelty of the contribution.\\n\\n**decision**\\n\\nIt is difficult to say that the contributions of this work are very significant due to several limitations listed in the weaknesses above. I recommend that the authors address these concerns and consider resubmitting to another venue.\", \"additional_comments_on_reviewer_discussion\": \"I don\\u2019t think that the authors made a valid argument to address the reviewer's concerns about novelty.\"}", "{\"title\": \"Response to question (4) - (5)\", \"comment\": \"Thank you for taking the time to review our paper and for providing valuable feedback.\\n\\n``Q4. How do you determine the proportions of the various losses in the optimization process? I believe that the hyperparameters setting these ratios significantly impact the method\\u2019s performance.``\\n\\nWe use a standard grid search to determine the hyperparameters. As shown in Figures 6, 7 and 8, the perspective of positive reward estimation plays a major role, and therefore we assign it a higher proportion. For simplicity, we generally assign equal proportions to the perspectives of sparsity and negative reward estimation, which are treated as constraint terms. 
Additionally, we use the same hyperparameter settings in most task sets within the Meta-World ML1 environment.\\n\\n\\n``Q5. In the ablation study, as shown in Figures 5 and 6, I noticed that, in experiments like reacher-v2, removing an individual component within TCE results in greater performance loss than fully removing TCE. How would you explain this phenomenon?``\\n\\nThanks for your detailed review. As stated in Line 475, Section 5.4, the results in Figure 6 are obtained without using the task contrastive loss, so they cannot be directly compared with the \\\"w/o TCE\\\" results in Figure 5. We acknowledge that there is ambiguity in the legend of Figure 6, and we will clarify it in the subsequent version.\\n\\nAdditionally, as discussed in Lines 480-525, Section 5.4, during the optimization of our task characteristic extractor, the perspective of positive reward estimation plays a major role, while the perspectives of sparsity in attention weights and negative reward estimation serve as constraints. Consequently, when optimization is performed without the perspective of positive reward estimation, it may lead to a significant performance decline.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you very much, we deeply appreciate the effort and time you've dedicated to providing us with your valuable feedback!\"}", "{\"title\": \"Response to the weakness about novelty\", \"comment\": \"``Weakness about novelty. Overall, the proposed method consists of several modified versions of previous loss functions and is also combined with existing contrastive learning technique. The technical novelty is not strong and there is no theoretical analysis on why the proposed objective function can guarantee generalisation in a meta learning setting.``\\n\\nThank you for taking the time to review our paper and for providing valuable feedback. 
Although the attention mechanism, loss functions, and the contrastive learning technique we utilize have their roots in other domains, we believe that TCMRL is novel to the context-based offline meta-RL setting for the following reasons:\\n\\n**(1) We propose a task characteristic extractor that builds upon the coarse contexts obtained through averaging in existing context-based offline meta-RL methods. It identifies and emphasizes transitions that represent task characteristics. We introduce the context-based reward estimator as a way to measure and to optimize the task characteristic extractor from the perspectives of positive and negative reward estimation and sparsity in attention weights.**\\n\\n**(2) We discover overlooked interrelations among transitions from trajectory subsequences. These interrelations are modeled as the mutual information among transitions within each subtrajectory, and the lower bound of the mutual information is approximated using the InfoNCE loss function.**\\n\\n**(3) We construct a novel combination of (1) and (2) and specifically adapt it for the context-based offline meta-RL setting. TCMRL captures both task characteristic information and task contrastive information to improve the generalization of contexts.**\"}", "{\"title\": \"Response to weakness (2) - (4)\", \"comment\": \"Thank you for taking the time to review our paper and for providing valuable feedback.\\n\\n``W2. It is unclear to me why Eq. (7) and Eq. (8) can lead to learning a context encoder such that the task characteristic extractor q can capture task unique transitions. The neural network is probably able to capture task conditioned reward with a task-level context without learning relations in terms of tasks transitions. In my opinion, the network does not promote the correct importance score of c_i. It probably only makes c_i and c_i^neg different and that is enough to learn a conditional reward function under Eq. (7) and (8).``\\n\\nIn Eq. 
(7), we use the context-based reward estimator as the metric to ensure that our task characteristic extractor can still effectively express task information, even when it assigns high attention weights to only a few transitions. This helps guide the task characteristic extractor to identify and emphasize transitions that represent the task characteristics (see Lines 268-269, Section 4.1.1).\\n\\nIn Eq. (8), we assign high attention weights to the remaining transitions and induce wrong reward estimations in the context-based reward estimator. This further mitigates the impact of redundant information from these transitions and indirectly emphasizes task unique transitions (see Lines 283-287, Section 4.1.1).\\n\\nIn summary, in Eq. (7), if less important transitions are assigned high attention weights, accurate reward estimations become difficult. Meanwhile, in Eq. (8), if important transitions are treated as part of the remaining transitions, the reward estimations are less likely to converge toward wrong targets. Together, these effects enable the optimization of our task characteristic extractor.\\n\\n``W3. Are r and r_reverse in Eq. (7) and Eq. (8) the same neural network with same parameters?``\\n\\nYes, your understanding is right. Both \\hat{r} and {\\hat{r}}_{reverse} are computed by the same neural network with the same parameters.\\n\\n``W4. It seems that Eq. (7) and Eq. (8) only capture the task shift in terms of reward function while the transition dynamics is ignored (no loss function in terms of next state prediction). Can authors please explain the reason?``\\n\\nWe believe there may be a misunderstanding. We have considered constructing the context-based dynamic estimator in the same way. 
However, since both the context-based dynamic estimator and our context-based reward estimator serve as ways to measure how well the context captures task characteristic information (Lines 249-250, Section 4.1.1), we have chosen to introduce only the context-based reward estimator based on our experimental results. Our experimental results show that adding the context-based dynamic estimator does not lead to significant performance improvement but instead increases computational cost.\\n\\nTo further validate this observation, we have conducted additional experiments in the Half-Cheetah-Vel, Hopper-Rand-Params, and Walker-Rand-Params environments, as well as three task sets from Meta-World ML1 (Button-Press-Topdown, Dial-Turn, and Reach). \\n\\nThe experimental results show that the performance results of TCMRL with reward estimations, TCMRL with dynamic estimations, and TCMRL with both reward and dynamic estimations are similar, with only slight variations that can be attributed to error fluctuations. Notably, to construct the learning objective for negative dynamic estimation, we sample r^{noise} in the same way from a multidimensional Gaussian distribution of noise to build {{s'}_{i}^{t+1}}^{neg}. 
The results are shown as follows:\\n| | TCMRL with reward estimations (ours) | TCMRL with dynamic estimations | TCMRL with reward and dynamic estimations |\\n| -------------------- | ------------------------------------ | ------------------------------ | ----------------------------------------- |\\n| Half-Cheetah-Vel | -79.7\\u00b111.3 | -81.9\\u00b112.6 | **-79.1\\u00b114.3** |\\n| Hopper-Rand-Params | **368.62\\u00b110.37** | 358.33\\u00b115.43 | 366.95\\u00b112.50 |\\n| Walker-Rand-Params | **354.97\\u00b119.72** | 347.40\\u00b123.18 | 351.06\\u00b117.01 |\\n| Button-Press-Topdown | **0.81\\u00b10.12** | 0.80\\u00b10.11 | 0.81\\u00b10.11 |\\n| Dial-Turn | **0.98\\u00b10.01** | 0.98\\u00b10.01 | 0.98\\u00b10.01 |\\n| Reach | **0.92\\u00b10.03** | 0.90\\u00b10.03 | 0.92\\u00b10.02 |\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"Thank you for taking the time to review our paper and for providing valuable feedback.\\n\\n``W1. Some results are reported as normalized scores, e.g. Table 1. However, there is no explanation for how normalization is processed.``\\n\\nAs shown in Figure 4, the performance of TCMRL and all baselines has largely converged within the training steps we set, under the conditions defined in Line 430, Section 5.2.\\nFor a specific task, we compute the average reward obtained from the steps after performance convergence. We then divide this average reward by its corresponding average return of the offline dataset to calculate the normalized score. The average returns of the offline datasets across all experimental environments are presented in Appendix H.\\n\\n``W2. Although context shift is highlighted as one of the primary issues that TCMRL aims to solve, there is no in-depth analysis of how TCMRL reduces context shift compared to other methods, and potential limitations.``\\n\\nTCMRL is supported by both empirical evidence and existing theoretical foundations. 
We have introduced the concept of context shift (see Lines 39-45, Section 1), the limitations of existing context-based offline meta-RL methods (FOCAL (Li et al., 2021b), FOCAL++ (Li et al., 2021a), CORRO (Yuan & Lu, 2022), IDAQ (Wang et al., 2023) and CSRO (Gao et al., 2023)) (see Lines 136-144, Section 2), the motivation of the task characteristic extractor (see Lines 207-217, Section 4.1.1) and the motivation of the task contrastive loss (Lines 307-317, Section 4.1.2).\\n\\nBuilding on these statements already presented in our paper, we will further analyze TCMRL. An important reason why existing methods are significantly impacted by context shift is that they do not consider constructing comprehensive internal relationships among tasks from the offline data of meta-training tasks. In this context, TCMRL aims to reduce the impact of context shift by learning how to effectively extract task information from offline data and constructing these internal relationships. These relationships are directly reflected in the contexts of the same task and the contexts of different tasks. Our goal is to capture task characteristic information and task contrastive information to generate contexts that contain both consistency within the same task and distinctness among different tasks. This allows us to construct comprehensive internal relationships among tasks, enabling the knowledge from offline datasets of meta-training tasks to be extended to unseen target tasks based on these relationships.\\n\\n**Potential limitation:**\\n\\nAs demonstrated in Appendix G.4, when constructing the task contrastive loss to capture task contrastive information, we use subtrajectories with a length determined by a hyperparameter K. This means that we may require additional time to tune this hyperparameter when facing a new environment. 
In future work, we plan to explore more effective ways to design the task contrastive loss to mitigate this limitation.\"}", "{\"title\": \"Response to question (1) - (3)\", \"comment\": \"Thank you for taking the time to review our paper and for providing valuable feedback.\\n\\n``Q1. As illustrated in the weaknesses, why is the lack of generalization attributed to the absence of task characteristic information and task contrastive information? Could you explain this in more detail, and how do you perceive the relationship between these two types of information?``\\n\\nOur method is based on both empirical evidence and existing theoretical foundations. We have introduced the concept of context shift (see Lines 39-45, Section 1), the limitations of existing context-based offline meta-RL methods (FOCAL (Li et al., 2021b), FOCAL++ (Li et al., 2021a), CORRO (Yuan & Lu, 2022), IDAQ (Wang et al., 2023) and CSRO (Gao et al., 2023)) (see Lines 136-144, Section 2), the motivation of the task characteristic extractor (see Lines 207-217, Section 4.1.1) and the motivation of the task contrastive loss (Lines 307-317, Section 4.1.2).\\n\\nBased on these previous statements, we will further analyze TCMRL in detail. When learning from offline datasets of meta-training tasks, existing context-based offline meta-RL methods fail to generate contexts that exhibit consistency within the same task and distinctness among different tasks, reflecting comprehensive internal relationships among tasks. We experimentally demonstrate that these properties correspond to task characteristic information and task contrastive information, respectively. TCMRL aims to capture both of them to generate generalizable contexts with exhaustive task information and construct comprehensive internal relationships among tasks. 
These relationships enable the extension of knowledge from offline datasets of meta-training tasks to unseen target tasks, facilitating efficient and effective adaptation to unseen target tasks.\\n\\nThese two types of information are complementary components of task information, and their roles are described in Lines 49-50, Section 1 and Lines 81-82, Section 1. By combining them, we can obtain complete task information, which allows us to build comprehensive internal relationships among tasks and generate generalizable contexts.\\n\\n``Q2. When extracting task characteristic information, why not consider using a well-established architecture like the Transformer? Given that Transformers leverage self-attention mechanisms to extract key information from sequences and create unified representations while capturing internal relationships within sequences, it seems like a viable option.``\\n\\nThanks for your comment. Actually, we have already considered using structures such as the Transformer before. However, as mentioned in Line 1397, Section G.9, compared to our attention mechanism based on MLP, they result in high and fluctuating GPU memory usage, as well as increased time costs, without providing significant performance improvements.\\n\\nTo further validate this observation, we have conducted additional experiments with the Transformer in the Half-Cheetah-Vel, Hopper-Rand-Params, and Walker-Rand-Params environments, as well as three task sets from Meta-World ML1 (Button-Press-Topdown, Dial-Turn, and Reach). Notably, when using the Transformer, our L_{TCE}^{spa} (Eq. (6)) is not utilized. 
The results are shown as follows:\\n\\n| | TCMRL | TCMRL with Transformer |\\n| -------------------- | ---------------- | ---------------------- |\\n| Half-Cheetah-Vel | **-79.7\\u00b111.3** | -95.53\\u00b117.4 |\\n| Hopper-Rand-Params | **368.62\\u00b110.37** | 337.40\\u00b121.05 |\\n| Walker-Rand-Params | **354.97\\u00b119.72** | 335.29\\u00b127.16 |\\n| Button-Press-Topdown | **0.81\\u00b10.12** | 0.56\\u00b10.08 |\\n| Dial-Turn | **0.98\\u00b10.01** | 0.84\\u00b10.07 |\\n| Reach | **0.92\\u00b10.03** | 0.88\\u00b10.05 |\\n\\n``Q3. Could you provide the rationale for designing the negative reward estimation as you did? What motivated this specific design?``\\n\\nAs mentioned in Lines 283-287, Section 4.1.1, we empirically validate that for optimizing our task characteristic extractor solely from the perspective of positive reward estimation, which focuses on identifying and emphasizing transitions that are task characteristics, its effect is limited. For this reason, we propose the novel idea of negative reward estimation. By inducing wrong estimations through c_{i}^{neg}, which primarily captures the redundant information of less important transitions, we indirectly optimize the task characteristic extractor. \\n\\nIf important transitions are contained in the remaining transitions, they tend to guide the estimation towards the correct target {r\\u2019}_{i}^{t} instead of the wrong target {{r'}_{i}^{t}}^{neg}.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to the weakness about fundamental technical reason and weakness (1)\", \"comment\": \"Thank you for taking the time to review our paper and for providing valuable feedback.\\n\\n``Weakness about fundamental technical reason. The proposed method in this work consists of many components around the context encoder training. 
However, It is unclear to me what is the fundamental technical reason behind these kinds of design and why these specific designs can achieve desired behaviour of the context encoder. There are many explanations in the method part in section 4 but they are not well structured in logic and look very lengthy.``\\n\\nWe believe there may be some misunderstandings. TCMRL is proposed based on both empirical evidence and existing theoretical foundations and built upon a clear high-level motivation. We have introduced the motivation of the task characteristic extractor (see Lines 207-217, Section 4.1.1) and the motivation of the task contrastive loss (Lines 307-317, Section 4.1.2).\\n\\nIn addition to these statements mentioned in our paper, we will further explain the overall motivation behind TCMRL. In cases where existing context-based offline meta-RL methods (FOCAL (Li et al., 2021b), FOCAL++ (Li et al., 2021a), CORRO (Yuan & Lu, 2022), IDAQ (Wang et al., 2023) and CSRO (Gao et al., 2023)) do not consider constructing comprehensive internal relationships among tasks from offline data of meta-training tasks, TCMRL aims to reduce the impact of context shift (see Lines 39-45, Section 1) by leveraging these relationships. These relationships are directly reflected in the contexts of the same task and the contexts of different tasks. Our goal is to capture task characteristic information and task contrastive information to generate contexts that exhibit both consistency within the same task and distinctness among different tasks. This allows us to construct comprehensive internal relationships among tasks, enabling the knowledge from offline datasets of meta-training tasks to be extended to unseen target tasks based on these relationships.\\n\\n\\n\\n``W1. Line 238: \\u201cas only a few key transitions within the trajectory provide the main task characteristic information\\u2026\\u201d This is to say many other transitions do not distinguish tasks. 
I have concern over this statement as this is only probably correct when the tasks have some property like a hierarchical structure. In general when the dynamics of a target task has a consistent shift on the entire state space, such sparsity prior would not be beneficial.``\\n\\nWe do not aim to use task characteristic information to distinguish different tasks. As mentioned in Lines 49-50, Section 1, task characteristic information reflects the consistency of contexts within the same task. This statement is supported by the theoretical foundations provided in two references (Arjona-Medina et al., 2019; Faccio et al., 2022) cited in Line 239. Our goal is to extract stable contexts for a particular task from its different trajectories by identifying and emphasizing transitions that are task characteristics, rather than being affected by the redundant information from less important transitions. The contexts that encompass comprehensive task information will directly guide the context-based policy to complete the corresponding task. Therefore, identifying and emphasizing transitions that are task characteristics within different trajectories of the same task is crucial.\\n\\nMoreover, as mentioned in Lines 81-82, we aim to distinguish tasks from one another by capturing task contrastive information. We design a task contrastive loss that discovers overlooked interrelations among transitions from trajectory subsequences through contrastive learning for capturing exhaustive task contrastive information (see Section 4.1.2).\\n\\nWe acknowledge that situations where \\\"the dynamics of a target task has a consistent shift on the entire state space\\\" do exist. Currently, neither the existing context-based offline meta-RL methods nor TCMRL can adequately address such challenging scenarios. In fact, the performance we report is not based on a single unseen target task but represents the average performance across a series of unseen target tasks. 
Some tasks in this series may fall into this category and exhibit low performance. We consider this limitation as one of the directions for future research.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer QaPj\\n\\nWe would like to know if our response has addressed your concerns and questions. If you have any further concerns or suggestions for the paper or our rebuttal, please let us know. We would be happy to engage in further discussion and manuscript improvement. Thank you again for the time and effort you dedicated to reviewing this work.\"}", "{\"summary\": \"The paper addresses the limitations in generalization and adaptation of existing context-based offline meta-reinforcement learning (meta-RL) methods. The proposed framework, TCMRL, enhances context generation by incorporating both task characteristic information, which identifies key transitions within tasks, and task contrastive information, which distinguishes tasks through interrelations in trajectory subsequences. This combined approach yields a comprehensive task understanding, improving adaptability to unseen tasks. Experiments confirm TCMRL\\u2019s advantage in generating generalizable contexts and effective adaptation over previous methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"\\u2022 Clarity: The paper is well-articulated, with a clear and complete structure that makes the methodology and findings accessible to readers. Additionally, the appendix provides extensive experimental details and analyses.\\n\\u2022 Significance: I believe the authors have targeted a valuable goal, namely addressing the challenge of context shift in context-based offline meta-reinforcement learning methods. 
The paper demonstrates consistent improvements over baseline methods across a wide range of experiments, showcasing the robustness of their approach in tackling this important issue.\", \"weaknesses\": \"Quality: The paper lacks a strong motivational foundation, particularly in explaining why task characteristic information and task contrastive information are expected to enhance context generalization. While the authors introduce a novel method based on these two types of information, the construction of the approach appears somewhat arbitrary, relying on intuition rather than solid theoretical underpinnings. An improved presentation could include theoretical justifications or empirical evidence demonstrating that capturing these specific forms of task information is indeed crucial for generalization.\\n\\u2022 Originality: Although the technical implementation is undoubtedly innovative in its details, the underlying concepts are relatively familiar within the field. Techniques such as implicit attention mechanisms, context encoding, and task-based contrastive learning have been explored previously, and this paper can be seen as a new combination of these existing ideas.\", \"questions\": \"1.\\tAs illustrated in the weaknesses, why is the lack of generalization attributed to the absence of task characteristic information and task contrastive information? Could you explain this in more detail, and how do you perceive the relationship between these two types of information?\\n2.\\tWhen extracting task characteristic information, why not consider using a well-established architecture like the Transformer? Given that Transformers leverage self-attention mechanisms to extract key information from sequences and create unified representations while capturing internal relationships within sequences, it seems like a viable option.\\n3.\\tCould you provide the rationale for designing the negative reward estimation as you did? 
What motivated this specific design?\\n4.\\tHow do you determine the proportions of the various losses in the optimization process? I believe that the hyperparameters setting these ratios significantly impact the method\\u2019s performance.\\n5.\\tIn the ablation study, as shown in Figures 5 and 6, I noticed that, in experiments like reacher-v2, removing an individual component within TCE results in greater performance loss than fully removing TCE. How would you explain this phenomenon?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the explanations, which address my concerns. I'm happy to keep the current rating.\"}" ] }
5GZuEZDmUE
Spectral Truncation Kernels: Noncommutativity in $C^*$-algebraic Kernel Machines
[ "Yuka Hashimoto", "Ayoub Hafid", "Masahiro Ikeda", "Hachem Kadri" ]
$C^*$-algebra-valued kernels could pave the way for the next generation of kernel machines. To further our fundamental understanding of learning with $C^*$-algebraic kernels, we propose a new class of positive definite kernels based on the spectral truncation. We focus on kernels whose inputs and outputs are vectors or functions and generalize typical kernels by introducing the noncommutativity of the products appearing in the kernels. The noncommutativity induces interactions along the data function domain. We show that it is a governing factor leading to performance enhancement: we can balance the representation power and the model complexity. We also propose a deep learning perspective to increase the representation capacity of spectral truncation kernels. The flexibility of the proposed class of kernels allows us to go beyond previous separable and commutative kernels, addressing two of the foremost issues regarding learning in vector-valued RKHSs, namely the choice of the kernel and the computational cost.
[ "kernel methods", "positive definite kernel", "spectral truncation" ]
Reject
https://openreview.net/pdf?id=5GZuEZDmUE
https://openreview.net/forum?id=5GZuEZDmUE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yxpBrkVgxF", "wbXxGF6Z55", "waQtQyZL1d", "wMvMzTWeXr", "tjzDbrU8yK", "pbP7ZWNCA8", "ohokV5VNnn", "o1kSKUm3l1", "nOzlWCJcZm", "moUhgxoA02", "kwe6WPeiJ5", "iJAMes5mSQ", "hrNN5KVTsd", "duaRq4gj5K", "dWuAtxXLlp", "a7IezdlCot", "ZkczZnLKtN", "XrzDFGtfdR", "UujAuVP2Dc", "UZFQBMsAK5", "NJ0ySv4RTQ", "LTbIjIPkpg", "IuuI0sxgr0", "EPZ6o7oJIm", "DOSMBlE3eG", "AndkrwcUP7", "8xhFj6CUxX", "8aljSLPD2f", "7BZT0UDs5B", "5j57Vk2Olb", "3iHOBLdIKU", "3AVzey4GIu" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732287804515, 1734230581996, 1732066699528, 1732767031857, 1730664249709, 1737523453828, 1732064510791, 1732241218730, 1732531696683, 1732750357062, 1732433533133, 1730733024962, 1732763452020, 1732700701509, 1732064838775, 1732690342144, 1730899438930, 1732070315370, 1730476741602, 1732068271154, 1733118023852, 1733113119203, 1732190984566, 1732069897528, 1732184617884, 1732065512606, 1732207255244, 1732530360692, 1732067030500, 1733121116961, 1732440443007, 1732343958998 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_jZ5N" ], [ "ICLR.cc/2025/Conference/Submission1463/Area_Chair_hadh" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_XMUi" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_jZ5N" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_33dL" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_XMUi" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_vkGt" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_jZ5N" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_33dL" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_jZ5N" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_XMUi" ], [ "ICLR.cc/2025/Conference/Submission1463/Reviewer_vkGt" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ], [ "ICLR.cc/2025/Conference/Submission1463/Authors" ] ], "structured_content_str": [ "{\"title\": \"Comment\", \"comment\": \"Thanks for your response. We acknowledge that kernel methods are better suited for situations with a small sample size compared to deep networks. The reference [1] is the deep kernel machine, which combines the deep network with the kernel method. So, in a sense, does it borrow the hierarchical structure of deep networks? In that case, would it tend to be more suitable for large-scale data? 
And the authors also extend the proposed method to the deep case. If so, what are the benefits of the deep kernel machine? One can argue that an advantage of deep kernel machines over DNNs is their theoretical soundness. However, DNNs also have theoretical support for convergence and generalization, such as the NTK.\"}", "{\"metareview\": \"This paper utilizes the tool of C*-algebras to develop new methods of kernel machines, receiving certain recognition from the reviewers. However, the current paper suffers from obvious weaknesses that prevent me from recommending acceptance. To be specific, the experimental setup, consisting of a synthetic dataset and a simple dataset called MNIST, is too weak to be convincing.\", \"additional_comments_on_reviewer_discussion\": \"After author-reviewer discussions, the major concern remains unresolved:\\n\\nReviewer vkGt, who strongly suggests rejection, states that \\u201cI thank the authors for their response. My primary concern with this paper lies in the unconvincing nature of its applications. They give two nice but simple demonstrations of their algorithm: data regression and an MNIST image reconstruction. In the context of operator learning (mentioned in the Appendix), the authors present an additional experiment with the Burgers' equation. While this shows some applicability, it remains a relatively simple example. If the proposed techniques were more novel, I would say that it is ok that the applications are underdeveloped. But since the techniques fairly build on well-established vector-valued RKHSs, I think the lack of developing applications for them weakens the paper's strengths\\u201d.\\n\\nThe AC has indeed checked the paper and agrees that the reviewer is spot on.\"}", "{\"comment\": \"Thank you very much for your constructive comments. We addressed your comments and questions and revised the paper. 
We summarize the answers below.\\n\\n**[W1] Application of approximation techniques** \\nAlthough we obtain a Gram matrix $G$ with function-valued elements for the proposed kernel, we can obtain the $\\mathbb{C}^{N\\times N}$ Gram matrix by evaluating it at a certain point $z$.\\nThen, we can apply low-rank approximations such as the Nystrom method to $G(z)\\in\\mathbb{C}^{N\\times N}$.\\nIn this way, we can reduce the computational cost with respect to $N$ for the proposed kernel, too.\\nIn the last paragraph of Section 5, we intended to convey that we can apply the Nystrom method to the proposed kernel, but we are sorry that the sentences were not clear.\\nWe revised the last paragraph of Section 5 to clarify the above point.\\n\\nThe main goal of this paper is to propose a new class of function-valued kernels based on spectral truncation and investigate their fundamental properties.\\nThus, although we can apply some approximation methods to the proposed kernels, we focused more on the basic method without the approximation.\\nIn addition, we may develop approximation methods by taking advantage of the fact that the outputs of the kernels are functions.\\nHowever, a more detailed investigation of approximation methods that are specific to the proposed kernel is future work.\\n\\n**[W2] Analysis of the deep model** \\nThe goal of Proposition 6.1 is not to provide a general analysis of deep models, but to provide a specific architecture that is effective for the deep model.\\nFor the deep model, we can construct the model by parameterizing the learnable values, and specifying the model architecture is important. 
\\nThe result in Proposition 6.1 provides a certain architecture of the deep model that achieves exponential growth of the representation power with respect to the number of layers.\\nThus, this result is useful in constructing a model with a theoretical guarantee.\\n\\n**[W3] Additional experiments with more complex, real-world datasets** \\nThe motivation of this paper is to propose kernels going beyond the separable and commutative kernels with low computational cost and to theoretically investigate their fundamental properties.\\nThis work is the first attempt to achieve the above goal, and the goal of the numerical experiments in this paper is to confirm these fundamental properties; more detailed experimental investigation is beyond the scope of this paper.\\nPlease see the global comment at the top of this page for more details of the motivation of this paper.\"}", "{\"summary\": \"This paper proposes a new class of positive definite kernels based on the spectral truncation. Detailed properties and examples have been discussed, and numerical results on both synthetic data and the MNIST dataset are presented.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors have introduced the basic properties of the proposed kernels and also investigated the generalization error. The presentation is clear.\", \"weaknesses\": \"Though the theoretical results are promising, two main questions remain unaddressed:\\n1. How can practical learning designs benefit from the new algebraic structures?\\n2. How can the development facilitate the kernel choices in practice? 
Section 6 seems to have some discussions on deep models, but a general development shall be data/task-dependent.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you very much for your constructive comments. We addressed your comments and questions and revised the paper. We summarize the answers below.\\n\\n**[Q1] Motivation, pros, and cons of the proposed method** \\nThe motivation of this paper is to propose kernels **going beyond the separable and commutative** (defined only with the pointwise calculation of functions or vectors) **kernels with low computational cost**.\\nIn the framework of vvRKHSs, transformable kernels and combinations of them with separable kernels have been proposed to go beyond separable and commutative kernels.\\nHowever, an important shortcoming of these kernels is their significant computational cost.\\nTo resolve this issue, we propose a function-valued kernel combined with the framework of RKHMs.\\nSeparable and commutative kernels are two extreme cases regarding the dependencies between input and output variables.\\nSeparable kernels identify dependencies between input and output variables separately, and cannot properly reflect information from the input variables in the output variables. **For the separable kernels, the output is determined only by the global information of the input**. 
\\nOn the other hand, **commutative kernels only identify the pointwise (completely local) dependencies**.\\n**The proposed kernel fills a gap between separable and commutative kernels with lower computational cost** than existing operator-valued kernels.\\nIndeed, we show that the proposed kernels converge to the commutative kernels as the truncation parameter $n$ goes to infinity.\\nOn the other hand, if $n=1$, then the proposed kernels are separable kernels.\\n**If $n$ is small, the proposed kernel focuses more on global information of input functions.\\nIf $n$ is large, the proposed kernel focuses more on local information of input functions.**\\nThis feature is achieved by introducing the **noncommutativity** to typical commutative kernels by applying the theory of spectral truncation, and is described by the property of the Fejer kernel.\\nThe Fejer kernel goes to the delta function as $n$ goes to infinity.\\nPlease see Figure 2 in Appendix B for more details about the Fejer kernel.\\n\\nAs we discussed in Section 8, one limitation of the proposed kernels is that if we generalize them to those for more general functions than functions on the torus, then the theoretical analysis becomes more complicated, and the results in this paper may not be valid for the generalized kernels.\\nAlthough we can formally generalize the kernels, thorough theoretical analysis for the generalized kernels is future work.\\n\\nWe revised the introduction and Section 3 to emphasize this motivation more. 
\\nIn addition, we added Sections A and B in the appendix to explain it in more detail.\\n\\n**[Q2] Issue with commutativity and the benefit of noncommutativity** \\nCommutative kernels only identify a completely local relationship between the input function and the output function.\\nThe value of the output function at $z$ is determined only by the value of the input function at $z$.\\nFor example, if we have a time-series input $[x_1,\\ldots,x_d]\\in\\mathcal{A}^d$ as explanatory variables and try to obtain an output function as a response variable, the values of $x_1(z),\\ldots,x_d(z)$ at time $z$ are strongly related to the value of the output at time $z$, but may also be related to $y(z+t)$ for $t\\in [-T,T]$ for a small number $T$.\\nIn this case, the commutative kernels are not suitable for extracting the relationship between $x_1(z),\\ldots,x_d(z)$ and the values of the output around $z$, not only at $z$.\\nWe documented detailed explanations of commutative kernels in Appendix A.\\n\\nApplying the theory of spectral truncation, we can induce the noncommutativity from commutative kernels and extract the relationship between $x_1(z),\\ldots,x_d(z)$ and the values of the output around $z$.\\nMoreover, we can control how much we will focus on local information by changing the value of $n$.\\nIf $n$ is large, then we can focus more on local information.\"}", "{\"comment\": \"Thank you very much for the clarification of the question.\\n\\nWe would like to emphasize that the proposed kernels are general and flexible. In particular, we have the following two points:\\n\\n- The proposed kernels are composed of scalar-valued kernels $\\tilde{k}\\_{i,j}$ and $\\tilde{k}$ in Definition 3.2. We can choose any kernel for $\\tilde{k}\\_{i,j}$ and $\\tilde{k}$, and the properties of the proposed kernels depend on that choice. 
If we choose a kernel with a small number of parameters for $\\tilde{k}\\_{i,j}$ and $\\tilde{k}$, then the choice of kernels is restricted, but if we choose a kernel with a large number of parameters (such as a weighted sum of multiple kernels), then by optimizing the parameters, we can obtain a better kernel for given data or tasks. \\n- For function-valued kernels, in the same manner as for scalar-valued kernels, the weighted sum of positive definite kernels and the product of positive definite kernels are also positive definite kernels. Thus, we can consider multiple proposed kernels and combine them with weight parameters. In that case, we can also optimize the parameters to obtain a better kernel for given data or tasks.\"}", "{\"comment\": \"Dear Reviewer XMUi,\\n\\nWe updated the paper by adding the above points regarding the choice of kernels in Remark B.2 (colored in blue). We would appreciate it if you could let us know whether the above answer addressed your question or not. Thank you for your time.\"}", "{\"title\": \"Comment\", \"comment\": \"Thanks for your response. It deepened my understanding of the motivation for this work.\"}", "{\"summary\": \"This paper proposes a new class of C*-algebra-valued positive definite kernels called spectral truncation kernels for vvRKHS. The noncommutativity, controlled by a truncation parameter n, allows for capturing interactions along the data function domain. The paper argues this enables a balance between representation power and model complexity, potentially leading to improved performance. 
A generalization bound is derived, highlighting the role of n in this tradeoff.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces an approach to kernel design by leveraging the mathematical framework of C*-algebras and RKHM, offering a potentially powerful way to model complex data relationships. The theoretical analysis of generalization bounds provides valuable insights into the trade-off between representation power and model complexity, guided by the kernel's truncation parameter.\", \"weaknesses\": \"1. While the authors claim a computational advantage over vector-valued RKHSs (vvRKHSs) due to the linear dependency on output dimension m compared to cubic dependency in vvRKHSs, this advantage is not clearly demonstrated. The computational cost analysis lacks a direct comparison with vvRKHSs employing appropriate approximation techniques. For instance, the use of Nystr\\u00f6m methods or random Fourier features could significantly reduce the computational burden of vvRKHSs, potentially negating the claimed advantage of spectral truncation kernels.\\n\\n2. The deep model extension, while promising, lacks theoretical grounding. The analysis of representation power growth is based on a very specific construction and doesn't provide general insights into the behavior of deep networks with spectral truncation kernels. \\n\\n3. The experimental results, while suggestive, are not compelling enough to validate the claimed advantages. The experiments are limited to synthetic data and a simplified MNIST task. More complex, real-world datasets with function-valued outputs are needed to assess the practical performance and demonstrate a clear advantage over existing methods.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The current manuscript has addressed my concerns. 
I have updated my score to reflect the improvement.\"}", "{\"comment\": \"Dear Reviewer vkGt,\\n\\nWe just want to let you know that since the deadline of revising the paper is approaching, we updated our paper by adding additional experimental results about operator learning in Appendix D.1 (colored in blue). We think this modification is related to your comments about the practical applications. Thank you again for your time and valuable comments.\"}", "{\"comment\": \"**[Q3] Problems that could in principle be tackled by the proposed method**\\nAs you pointed out, considering the case where inputs and outputs are functions is important for the proposed kernel.\\nThe application to image data is just an example, and there are more applications, which involve functions.\\nAlthough this work is the first attempt to apply the noncommutativity to go beyond separable and commutative kernels with low computational cost, it opens up these applications.\\nWe list two examples below.\\n\\n1. **Time-series data analysis** \\nWe can regard a time-series as a function on a time space.\\nIn many cases, a state at a certain time $z$ is strongly influenced by other states at the same time $z$, but also by the states around the time $z$.\\nSince commutative kernels focus only on local information, we cannot describe both of these relationships with commutative kernels.\\nOn the other hand, since separable kernels focus only on global information, we cannot describe the relationship between these two states at each time $z$.\\nBy applying the proposed kernel, we can extract global information but can also focus on local information.\\n\\n2. 
**Operator learning** \\nIn the framework of operator learning, we obtain a solution of a partial differential equation as an output from an input function (such as an initial condition or a parameter of the equation).\\nThus, we construct a model where both the input and output are functions.\\nApplying kernel methods to operator learning has been proposed [1].\\nWe can construct the model by solving a kernel ridge regression task.\\nAnother well-known operator learning method is the neural operator.\\nIn the framework of neural operators, we apply integral operators to extract global information and apply local linear operators and local activation functions to extract local information.\\nThe proposed kernel enables us to perform similar procedures for operator learning with kernels.\\nBy considering the product of multiple proposed kernels with different values of $n$ or a deep model with the proposed kernels with different values of $n$, we can extract both global and local information; we can extract global information using the kernel with small $n$ and extract local information using the one with large $n$ in the model.\\n\\nWe mentioned this application in Section 8 and added Appendix D for explaining the above examples.\\n\\n[1] Pau Batlle, Matthieu Darcy, Bamdad Hosseini, and Houman Owhadi, \\\"Kernel methods are competitive for operator learning\\\", Journal of Computational Physics, 496(1):112549, 2024.\\n\\n**[Q4] Comparison with other function-valued regression methods with very large data** \\nThe motivation of this paper is to propose kernels going beyond the separable and commutative kernels with low computational cost and to theoretically investigate their fundamental properties.\\nThe goal of the numerical experiments in this paper is to confirm these fundamental properties, and more detailed experimental investigation is beyond the scope of this paper.\\nIn addition, an advantage of kernel methods is that they are valid even with a limited number of 
samples.\\nThus, we focused on the case of small numbers of samples in the experiments.\\n\\n**[W1] The matrix-valued kernel used in the experiment** \\nIn the experiment in Subsection 7.1, we compared the proposed kernel with an existing typical nonseparable kernel (combination of a transformable kernel and separable kernel) proposed by Lim et al. (2015).\\nAs we stated in Appendix A, the transformable kernels are described using an integral operator, and the matrix-valued kernel used in Subsection 7.1 is regarded as a proper discretization of the operator-valued kernel.\\nWe think it is natural to apply this operator-valued kernel to functional data, and it is suitable for the comparison with the proposed kernels.\\nWe can see that the proposed kernels outperform the above operator-valued kernel and the computational cost for the proposed kernel is also lower than that of the operator-valued kernel, which shows the advantages of the proposed kernel over typical nonseparable and noncommutative operator-valued kernels.\"}", "{\"comment\": \"Dear Reviewer 33dL,\\n\\nThank you again for your comments. Regarding [W3], our main goal is to propose kernels going beyond the separable and commutative kernels with low computational cost and to theoretically investigate their fundamental properties.\\nHowever, we agree that more experimental results are needed to show the applicability of the proposed kernels to practical applications. For this purpose, we conducted an additional experiment.\\n\\nOne potential application of the proposed kernel is the application to operator learning. \\nIn the framework of operator learning, we obtain a solution of a partial differential equation as an output from an input function (such as an initial condition or a parameter of the equation). Thus, we construct a model where both the input and output are functions. Applying kernel methods to operator learning has been proposed [1]. 
We can construct the model by solving a kernel ridge regression task. Please see Appendix D for more details. \\n\\nWe applied the proposed kernel to operator learning and obtained a higher performance than the existing method proposed in [1]. Please see Appendix D.1 (colored in blue) for more details.\\n\\nAlthough experiments for more complex cases are future work, we believe this result addresses your point [W3], and shows the possibility of the proposed kernels for further applications.\\nIf you have any additional comments, questions, and concerns, please let us know.\\n\\n[1] Pau Batlle, Matthieu Darcy, Bamdad Hosseini, and Houman Owhadi, \\\"Kernel methods are competitive for operator learning\\\", Journal of Computational Physics, 496(1):112549, 2024.\"}", "{\"summary\": \"This paper explores the recent subfield of positive definite kernels with values in a C*-algebra and RKHM (the corresponding \\\"RKHS\\\" theory). The whole work is motivated by going beyond the separable kernel widely used in vector-valued RKHS based on operator-valued kernels and benefiting from a better compute time when applying kernel \\u201cridge\\u201d regression. The main interest of working with a C*-algebra is that it comes with a norm, a product and an involution, unifying operators and functions. In particular, the paper focuses on the C*-algebra of continuous functions and the case where inputs as well are elements of this C* algebra. The paper is illustrated with the example of continuous functions on the 1D torus. The authors propose a novel function-valued kernel, the spectral truncation kernel, relying on the approximation of the multiplication operator with respect to x (defined in L2(T)) by leveraging a truncated spectral decomposition. The dimension of the truncated basis encodes a trade-off between the representation power and the model complexity. The resulting kernel also benefits from the noncommutativity of the approximated product and can be shown to converge. 
Applied to kernel ridge regression in RKHM, this new kernel leads to a reduction in time complexity. It also comes with a generalization bound which is a direct instantiation of the result proven by Hashimoto et al. (2023). A deep architecture based on products (and not compositions) of different kernel-based functions in RKHM is also presented. Experimental results study the behaviour of the approach with respect to the truncation parameter on a toy dataset. An additional result on an inpainted image recovery problem built on MNIST data is also briefly presented.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper follows an original path in kernel theory, exploring new schemes of kernels with values in a C* algebra. The work is certainly promising and of great interest for the kernel community. Based on solid mathematical work, it opens a new way to tackle vector-valued or function-valued regression.\", \"Of special interest in the case of inputs and outputs that can be considered as functional (or vectors that can be seen as values of functions, like images), the spectral truncation kernel allows for a drastically reduced computation cost in kernel ridge regression while offering a great expressivity.\", \"It can be instantiated with various choices of ground kernels and its positive definiteness is studied\", \"Products of (function-valued)-functions based on those kernels provide a deep architecture.\"], \"weaknesses\": \"Even though very interesting, the paper suffers from different flaws: some concern the presentation and can be considered as relatively minor, while the others are more fundamental.\", \"Weaknesses in the content\": \"* The motivation of the paper remains unclear: if the authors wish to go beyond the separable kernel in the general case of vector-valued functions, they should briefly discuss the limitations (which indeed exist) of the different operator-valued kernels such as the 
transformable kernels, the separable kernels or combinations of them. If the motivation is to use RKHM theory in the case of function-valued functions then it is of paramount importance to highlight what cannot be done with operator-valued kernels devoted to functions with outputs in Hilbert spaces of functions. \\n* Once a family of kernels is defined, in machine learning, we are interested in the ability of the machine learning algorithms to indeed benefit from this kernel and provide a good solution to the ML task. So what is missing in this paper is a discussion and an empirical study to determine when using those kernels is interesting compared to previous methods: does the complexity of the model make the algorithm more greedy in training data? I do not think that the actual generalization error bound really helps to tell us that in precise terms.\\nMy advice is thus to complete the paper with comparisons with other (operator-valued) kernels and vv-RKHS. This has to be done not only on the current toy dataset but also on known functional regression data sets.\\n* Applicability and relevance of the methodology: a central question that is still not sufficiently answered at the end of the paper is the following: on which family of problems are these spectral truncation kernels relevant?\\nFor instance, the use of function-valued kernels for inputs and outputs which are vectors should be discussed. I think it is important here to clarify this: images are by definition a discretisation of continuous maps (intensity of pixels in finite resolution) and then they can be seen as a set of values of a function taken at different observation points. 
There is a great interest in considering the functions as continuous functions.\", \"I do not think it is always meaningful for a vector to be encoded as a function of its coordinate index\": \"can you comment on that?\\n* Finally I do think that the paper would have sufficient content if it was restricted to function-valued functions. However, if images are taken as examples, then more convincing and complete comparisons on image completion should be given, with more involved problems than weakly inpainted images. For the results given, do not just say vv-RKHS comparison in the table; state clearly the name of the matrix-valued kernel you used, and try other kernels, including more general operator-valued kernels for function-valued functions.\\n\\nWeaknesses in the presentation of the paper.\\nThe paper is in general not self-contained, too abrupt in its statements, and not precise enough in its presentation. It seems to me that an important work of re-writing is necessary, even though it is quite obvious that some efforts have been made here.\", \"To give a few suggestions\": [\"rewrite the introduction with clear motivations and do not enter into partial details that cannot be understood at this stage (n < infty, n = infty, ...)\", \"line 154 we jump into a comment about C(T) but previously functions on the real torus were just an example. Now X = A ?\", \"please say it! Moreover, we cannot understand the sentence \\\"however, by approximating ... by a Toeplitz matrix..;\\\", we do not know yet that a Toeplitz matrix will be involved here\", \"before line 174, say a word about the works of Van Suijlekom and explain the role of the Fejer function. The reader has to consult this paper to understand the construction. 
It is important in what follows when talking about convergence.\", \"in general, do not give a proposition in the form of a bare formula; write a sentence introducing the property and a comment on what the proposition brings.\", \"generalization bound: clearly state, as in the appendix, that this result comes directly from previous literature (Maurer, 2016; Hashimoto et al., 2023)\", \"It is crucial to introduce m when describing the observations at line 364.\", \"experiments :\"], \"What do you want to bring in terms of empirical evidence\": \"please present the experiments as an answer to the questions/motivations at the beginning of the paper\\n\\n* after rebuttal: the paper has gained significant improvements and I am increasing my score consequently.\", \"questions\": [\"Overall I would be happy to increase the score of the paper (around 4) if answers are brought during the rebuttal.\", \"Please clarify the motivation and express the pros and cons with previous competing methods\", \"Give at least one example to make the reader understand the issue with commutativity and the benefit of noncommutativity\", \"Identify as much as possible the family of problems that could in principle be tackled by this method\", \"Complete the toy experiments with a comparison with other function-valued regression methods\", \"for the existing experiments, are the same curves observed when dealing with a very large data regime ?\", \"Nearly all my questions have been answered in a satisfying way.\"], \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**To all the reviewers**\\nWe thank all the reviewers for their constructive comments and questions. We addressed them and revised the paper based on the comments. The revised parts are colored in red. 
The answers to the comments and questions are posted as comments for each reviewer.\\n\\nWe would like to emphasize that this work is the first attempt to incorporate the noncommutativity of the product combined with spectral truncation to go beyond typical kernel choices.\\nThe theoretical results open up a new direction for the applications of the proposed noncommutative kernels.\\nHere, we would like to summarize the advantages of the proposed kernels below:\\n\\n1. **Control of local and global dependencies** \\nFor kernel methods with function- or vector-valued outputs, one big challenge is the choice of kernels.\\nIn the existing framework of vvRKHSs, we need operator-valued kernels instead of scalar-valued kernels.\\nTypical choices are separable kernels and commutative kernels.\\nAlthough they are computationally efficient, they are two extreme cases in the sense of dependencies of input functions or vectors on output functions or vectors.\\n**Separable kernels focus only on global dependencies and commutative kernels focus only on local dependencies.\\nThe proposed kernels fill a gap between these two kernels by virtue of the noncommutativity.**\\nThe noncommutativity is parameterized by a natural number $n$, and we showed that this parameter controls to what extent local dependencies are focused on.\\n\\n2. **Computational efficiency** \\nTo fill a gap between separable and commutative kernels, other operator-valued kernels in the framework of vvRKHSs have been proposed.\\nThe proposed kernels combined with the framework of RKHMs are **computationally more efficient than these existing kernels with vvRKHSs**.\\n\\n3. 
**Control of the representation power and the model complexity** \\nThe parameter $n$ in the proposed kernel also controls the representation power and the model complexity, which leads to performance enhancement.\\n\\nFor more details, please see the answers to the reviewers and Appendices A, B, and D of the revised version of the paper.\"}", "{\"summary\": \"In this paper, the authors propose a set of positive definite spectral truncation kernels, which is a class of $C^*$-algebra-valued kernels. The definitions of the proposed kernels involve several concepts, including $C^*$-Algebra, function-valued kernel, spectral truncation, and the torus. The authors provide a theoretical analysis of the convergence and generalization. In addition, the authors introduce a noncommutativity and further illustrate its effectiveness with numerical results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well written and organized.\\n2. Theoretical analysis is detailed, enabling the proposed method to have solid theoretical support.\\n3. The authors consider the deep spectral truncation kernel, sharing the advantage of both deep model and spectral truncation kernel, to improve the representation power.\", \"weaknesses\": \"Please see the **Questions** section.\", \"questions\": \"1. In line 156, $z \\\\in \\\\mathbb{T}$ be the Fourier function. Why is $z$ a function? Based on the **Example 2.2**, $\\\\mathbb{T}$ is a set, and the elements of $\\\\mathbb{T}$ are real numbers. This puzzled me.\\n2. One of the main contributions of this paper is that the authors generalize typical kernels by introducing the noncommutativity of the products appearing in the kernels and showing their advantages. This is because $\\\\mathcal{R}_n (x)$ and $\\\\mathcal{R}_n (y)$, based on the spectral truncation, do not commute. However, the benefits of introducing noncommutativity in terms of convergence and generalization were not found. 
The effect of the noncommutativity is just illustrated in the experiment part. Or am I missing some details?\\n3. **Theorem 3.4** gives the theoretical result of the convergence of the proposed kernels. Does this mean that the proposed $k_n^{poly,q}(x, y)(z)$, $k_n^{prod,q}(x, y)(z)$, and $k_n^{seq,q}(x, y)(z)$ can approximate $k^{poly,q}(x, y)(z)$, $k^{prod,q}(x, y)(z)$, and $k^{seq,q}(x, y)(z)$, respectively? So is there any theoretical guidance for the selection of $n$, that is, how large should $n$ be for a good approximation?\\n4. From **Theorem 4.1**, we can observe that the generalization bound is related to the trace of the kernel. How is this different from the previous theoretical results?\\n5. $n$ is the number of orthogonal bases. Therefore, the complexity of the model is larger if $n$ is larger, and the representation power of the model is better. This phenomenon also occurs in the general learning process or kernel function approach strategies. How is this different from them?\\n6. To obtain $c(z)$, one computes $(G(z)+\\\\lambda I)^{-1}y(z)$, which scales poorly.\\n7. Some definitions are in the complex number domain, while some are in the real number domain. It is confusing for me. When should one work in the complex number domain and when in the real number domain? Including pseudo-code would help show the details.\", \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your constructive comments. We addressed your comments and questions and revised the paper. 
We summarize the answers below.\\n\\n**[Q1] Presentation regarding the Fourier function** \\nWe are sorry for the confusion.\\nWe replaced the sentence with \\\"Let $e_j$ be the Fourier function defined as $e_j(z)=\\\\mathrm{e}^{\\\\mathrm{i}jz}$ for $j\\\\in\\\\mathbb{Z}$\\\".\\n\\n**[Q2] Benefits of introducing noncommutativity** \\nAlthough benefits in terms of convergence and generalization are also important, the main scope of this paper is to propose kernels **going beyond commutative and separable kernels with lower computational cost** than existing operator-valued kernels with the framework of vvRKHSs.\\nSeparable and commutative kernels are two extreme cases regarding the dependencies between input and output variables.\\nFor separable kernels, the output is determined only by the global information of the input. \\nOn the other hand, commutative kernels only identify the pointwise (completely local) dependencies.\\n**The proposed kernels fill a gap between separable and commutative kernels** with lower computational cost than existing operator-valued kernels.\\nIndeed, the proposed kernel is indexed by the truncation parameter $n$, and we showed that the proposed kernels converge to the commutative kernels as $n$ goes to infinity.\\nOn the other hand, if $n=1$, then the proposed kernels are separable kernels.\\n**If $n$ is small, the proposed kernel focuses more on global information of input functions.\\nIf $n$ is large, the proposed kernel focuses more on local information of input functions.**\\nThis feature is achieved by introducing the noncommutativity to typical commutative kernels by applying the theory of spectral truncation, and is described by the property of the Fejer kernel.\\nWe revised Sections 1 and 3 and added Appendices A and B to clarify the benefit of introducing the noncommutativity.\\n\\n**[Q3] Theoretical guidance for the selection of $n$: how large $n$ should be for a good approximation** \\nAlthough 
$k_n^{\\\\operatorname{poly},q}$, $k_n^{\\\\operatorname{prod},q}$, and $k_n^{\\\\operatorname{sep},q}$ can approximate $k^{\\\\operatorname{poly},q}$, $k^{\\\\operatorname{prod},q}$, and $k^{\\\\operatorname{sep},q}$, our goal is not to approximate the commutative kernels $k^{\\\\operatorname{poly},q}$, $k^{\\\\operatorname{prod},q}$, and $k^{\\\\operatorname{sep},q}$.\\nAs we explained above, by setting $n$ as a finite value, we can go beyond separable and commutative kernels.\\nIndeed, the parameter $n$ controls the input and output dependencies.\\nIf $n$ is small, the proposed kernel focuses more on global information of input functions.\\nIf $n$ is large, the proposed kernel focuses more on local information of input functions.\\nThis feature is described by the property of the Fejer kernel.\\nWe can determine the optimal $n$ in the sense of the dependencies by observing the Fejer function $F_n^{2q,P}$.\\nSince the proposed kernel is defined by the convolution of the input function with the Fejer kernel, the volume of the region where the value of the Fejer kernel is sufficiently large corresponds to the range of local dependencies.\\nThus, if we have information about the local dependencies, then we can choose $n$ based on the values of $F_n^{2q,P}$.\\n\\nIn Appendix B, we added Remark B.1 about determining $n$ as discussed above. 
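This localization effect can be checked numerically. The following is a toy sketch of ours (not code from the paper); it uses the classical Fejér kernel as a stand-in for the paper's $F_n^{2q,P}$, and the grid and threshold $|z| < 0.5$ are arbitrary choices for illustration:

```python
import numpy as np

def fejer(z, n):
    """Classical Fejer kernel F_n(z) = (1/n) * (sin(n z / 2) / sin(z / 2))**2 on the torus."""
    z = np.asarray(z, dtype=float)
    s = np.sin(z / 2.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = np.sin(n * z / 2.0) ** 2 / (n * s ** 2)
    return np.where(np.isclose(s, 0.0), float(n), vals)  # F_n(0) = n by continuity

# The fraction of the kernel's mass concentrated near z = 0 grows with n,
# matching the discussion: small n emphasizes global, large n local dependencies.
z = np.linspace(-np.pi, np.pi, 20001)
for n in (2, 8, 32):
    w = fejer(z, n)
    print(f"n = {n:2d}: mass fraction in |z| < 0.5 is {w[np.abs(z) < 0.5].sum() / w.sum():.3f}")
```

The printed fractions increase with $n$, which is one way to pick $n$ when the range of local dependencies is known.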
We also added Figure 2 to help understand the property of the Fejer kernel.\\n\\n**[Q4,Q5] Difference from previous results of generalization bounds** \\nThe argument regarding the representation power and model complexity is common.\\nHowever, since we proposed a new kernel, it is important to check that this common property is also valid for the proposed kernel.\\nThis paper is the first step of proposing kernels that go beyond separable and commutative kernels based on the noncommutativity and spectral truncation.\\nMore detailed analysis of what benefit is obtained in the sense of generalization bounds compared to other machine learning methods is future work.\"}", "{\"comment\": \"I thank the authors for their response. My primary concern with this paper lies in the unconvincing nature of its applications. They give two nice but simple demonstrations of their algorithm: data regression and an MNIST image reconstruction. In the context of operator learning (mentioned in the Appendix), the authors present an additional experiment with the Burgers' equation. While this shows some applicability, it remains a relatively simple example. If the proposed techniques were more novel, I would say that it is ok that the applications are underdeveloped. But since the techniques fairly build on well-established vector-valued RKHSs, I think the lack of developing applications for them weakens the paper's strengths.\"}", "{\"comment\": \"Dear Reviewer 33dL,\\n\\nApologies for our repeated messages. Since the end of the discussion period is approaching, we would appreciate it if you could let us know whether the above answer and additional experiment address your comments or not and let us know if you have additional questions, comments, and concerns related to these points. 
Thank you for your time.\"}", "{\"comment\": \"Thank you very much for your additional questions, and thank you for considering improving the score.\\n\\nWe have the following three points regarding the advantages of applying the proposed kernels over applying DNNs.\\n- In many cases, kernel methods perform better than DNNs when the number of samples is limited [1]. For large-scale data tasks, as we discussed in the answer to [Q6], we can use approximation methods such as the Nystrom method to reduce the computational cost with respect to $N$. \\n- One potential application of the proposed kernel is the application to operator learning (Please see Appendix D for more details). In [2], the authors show that the kernel method for operator learning either matches or beats the performance of NN-based methods on a majority of benchmarks. Thus, if we apply the proposed kernel to operator learning, its performance can be better than that of NN-based methods.\\n- An advantage of kernel methods over DNNs is their theoretical soundness. As the authors of [2] also emphasize in their paper, with kernel methods, we have convergence guarantees, a priori error estimates, Bayesian uncertainty quantification, and so on.\\n\\n[1] Tingting Wang, Huayou Su, and Junbao Li, \\\"DWS-MKL: Depth-width-scaling multiple kernel learning for data classification\\\", Neurocomputing, 411: 455-467, 2020. 
\\n[2] Pau Batlle, Matthieu Darcy, Bamdad Hosseini, and Houman Owhadi, \\\"Kernel methods are competitive for operator learning\\\", Journal of Computational Physics, 496(1):112549, 2024.\"}", "{\"comment\": \"**[Q6] Scalability of computing $\\\\mathbf{c}(z)$**\\nThe computation of $(\\\\mathbf{G}(z)+\\\\lambda I)^{-1}\\\\mathbf{y}(z)$ is the most standard way of obtaining $\\\\mathbf{c}(z)$ without any approximation technique.\\nWe emphasize that for the case without approximations, the computational cost is lower than that of vvRKHS methods with existing nonseparable operator-valued kernels such as transformable kernels.\\nAs we stated in the last paragraph of Section 5, for the proposed kernel with the framework of RKHMs, the computational cost for obtaining $\\\\mathbf{c}(z_1),\\\\ldots,\\\\mathbf{c}(z_m)$ for different points $z_1,\\\\ldots,z_m$ is $O((q+m)n^2N^2+mN^3)$.\\nOn the other hand, that for existing nonseparable operator-valued kernels with the framework of vvRKHSs is $O(m^3N^3)$.\\nThus, if $(q+m)n^2<m^3N$, then the proposed kernels are more computationally efficient than the existing kernels, such as transformable kernels.\\nIn addition, we can apply low-rank approximation such as the Nystrom method to each $\\\\mathbf{G}(z_i)\\\\in\\\\mathbb{C}^{N\\\\times N}$.\\nIn this way, we can reduce the computational cost with respect to $N$ for the proposed kernel.\\nWe revised the last paragraph of Section 5 to clarify the above point.\\n\\n**[Q7] Complex-valued and real-valued notions** \\nWe are sorry for the confusion.\\nIn practical applications, we can use complex-valued kernels.\\nHowever, only when we derive the generalization bound do we need to restrict the kernels to be real-valued.\\nThis is because the theory of generalization bounds is for real-valued functions in general.\\nWe added the explanation that we consider real-valued kernels for the generalization bound analysis before Theorem 4.1.\"}", "{\"title\": \"Comment\", \"comment\": \"Thank 
you for your response. It has deepened my understanding of this work. I am considering improving my score. But I still have a concern. As mentioned above, the proposed method indeed reduces computational cost. However, it still needs $\\\\mathcal{O}(N^3)$, which is not acceptable for large-scale data tasks. DNNs can also capture global and local features and have better representation capability. What are the benefits of the proposed method? Or, what is the necessity of studying this method?\"}", "{\"comment\": \"**Regarding the comments on the presentation**\\nThank you for your suggestions about the presentation. We also revised the paper based on your comments on the presentation. \\n\\n**[P1] Rewriting the introduction with clear motivations** \\nWe rewrote the introduction so that the motivation of the proposed kernel (beyond separable and commutative kernels with lower computational cost than existing operator-valued kernels with the framework of vvRKHSs) becomes clear.\\nWe also added a table (Table 1) to explain the difference between the proposed kernel and existing kernels.\\n\\n**[P2] Clarification of the data space $\\\\mathcal{X}$ and about the sentence \\\"however, by approximating ... 
by a Toeplitz matrix..;\\\"** \\nSection 2.2 is purely for spectral truncation, and is not directly related to the kernel methods.\\nThus, we added the explanation of $\\\\mathcal{X}$ just before Example 3.1.\\nWe also deleted the word \\\"Toeplitz\\\" in the second sentence in Section 2.2.\\n\\n**[P3] The role of the Fejer kernel** \\nWe added the explanation of the Fejer kernel before Proposition 2.6.\\n\\n**[P4] About the presentation of Lemma 3.9** \\nWe added some words in the statement of Lemma 3.9.\\n\\n**[P5] References for deriving the generalization bound** \\nWe do not agree that this result comes directly from previous literature.\\nWe confirmed that the result about the generalization bound is also valid for the $C^*$-algebra-valued (function-valued) regression problem.\\nAlso, we showed that the operator-norm of the Toeplitz matrix $R_n(x)$ grows as $n$ becomes large.\\nWe agree that we need to refer to previous literature for deriving the generalization bound in the main text, too.\\nThus, we added the references before Theorem 4.1.\\n\\n**[P6] Introducing m in Section 5** \\nWe added an explanation about $m$ at the beginning of Section 5.\\n\\n**[P7] Presenting the experiments as an answer to the questions/motivations at the beginning of the paper** \\nWe added the goals of the experiments in Section 7.\"}", "{\"title\": \"Thank you for your reply\", \"comment\": \"I would like to clarify my questions: The kernel-related approaches are generally sensitive to kernel choices. On the one hand, it is inefficient to try many possible kernels. However, if the kernel choice depends on only a small subset of parameters (like in RBF kernels), then the family of RKHSs is also restricted by such limited choices. 
How can the characterization (potentially) address the kernel choices for practical data and problems?\"}", "{\"title\": \"Feedback on the rebuttal\", \"comment\": \"I appreciate the answers provided by the authors and acknowledge the improvements proposed in the paper.\\nIn fact, the presentation of the motivation, the new insights at different places of the paper, and the new discussion and clearer references significantly enhance the quality of the paper, leading me to improve my score.\"}", "{\"comment\": \"Thank you very much for your constructive comments. We addressed your comments and questions and revised the paper. We summarize the answers below. Please see the global comment at the top of this page for more details on the motivation of this paper.\\n\\n**[W1] Benefit from the new algebraic structures** \\nAn important benefit of applying the noncommutative algebraic structure is that we can **go beyond separable and commutative kernels with low computational cost**.\\nBy virtue of the noncommutative structure, we can generalize commutative kernels, which enables us to induce interactions along the data function domain.\\nSince the kernel is function-valued, we can apply the framework of RKHMs and can obtain the final solution with the calculation of functions, which results in lower computational cost than the vvRKHS method with existing operator-valued kernels.\\n\\n**[W2] Kernel choices in practice** \\nApplying the proposed kernels, we can extract both local and global dependencies of the output\\nfunction on the input function.\\nThe proposed kernel has a parameter $n$, which controls how much we focus on local dependencies.\\nThe optimal choice of $n$ depends on data and tasks.\\n**If we want to focus on the global dependencies, then $n$ should be set as a small number.\\nOn the other hand, if we want to focus on local dependencies, then $n$ should be set as a large number.**\"}", "{\"comment\": \"Thank you for your response.\\n\\nWe would like to 
emphasize that our proposed technique is not on the framework of vvRKHSs, but on the framework of RKHMs [1]. As we explained in the global comment, the proposed kernels fill a gap between existing operator-valued kernels: separable and commutative kernels. For the cases $n=1$ and $n=\\\\infty$, the proposed kernels are equivalent to the existing operator-valued kernels, where $n$ is the truncation parameter. However, if $1<n<\\\\infty$, they are different from existing operator-valued kernels, but they are **function-valued** kernels ($C^*$-algebra-valued kernels). We combined the proposed kernels with the framework of RKHMs. The reason why we can reduce the computational cost compared to the existing operator-valued kernels is that we proposed function-valued kernels and applied the framework of RKHMs. Please see Table 1 and Section 5 for more details.\\n\\nThe framework of RKHMs is a newly developed framework, and this paper demonstrates the potential power of applying RKHMs. This is the first paper that proposes function-valued kernels in the framework of RKHMs and shows their computational efficiency over the framework of vvRKHSs.\\n\\n[1] Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri, $C^*$-Algebraic Machine Learning \\u2212 Moving in a New Direction, ICML 2024.\"}", "{\"comment\": \"Thank you very much for your comments and questions, and thank you for updating the score. If you have any additional comments, questions, or concerns, please let us know.\"}", "{\"comment\": \"Thank you for the additional questions.\\n\\nRegarding the deep methods with kernels, they are also suited to the case where the number of samples is limited. An advantage of the deep methods with kernels is their connection with benign overfitting. As discussed in Section 6.2 in [3], we can learn models so that they overfit benignly if the number $L$ of layers is $L\\\\ge 2$. 
\\nRegarding the comparison with the theory of DNNs, not only is the theoretical analysis of kernel methods simpler, but the deep model with kernels also has an advantage in terms of the generalization bound.\\nAs discussed in Section 6.4 in [3], for the deep model with kernels, we can obtain a generalization bound whose dependency on the widths of the layers is smaller than those for DNNs.\\nDifferent from obtaining the deep structure by the composition of kernels, as in [1] and [3], the deep structure in our case, discussed in Section 6, is obtained by the product of the proposed kernels, and is for improving the performance. \\nWe can extract both local and global dependencies at the same time by considering the product of the kernels.\\nWe can also apply the proposed kernels to the deep structure discussed in [1,3], and then we can have the same advantages of the deep methods discussed in [1,3], as discussed above.\\n\\n[3] Yuka Hashimoto, Masahiro Ikeda, and Hachem Kadri, \\\"Deep learning with kernels through RKHM and the Perron-Frobenius operator\\\", NeurIPS 2023.\"}" ] }
5GI6BGToyw
AtmosArena: Benchmarking Foundation Models for Atmospheric Sciences
[ "Tung Nguyen", "Prateik Sinha", "Advit Deepak", "Karen A. McKinnon", "Aditya Grover" ]
Deep learning has emerged as a powerful tool for atmospheric sciences, showing significant utility across various tasks in weather and climate modeling. In line with recent progress in language and vision foundation models, there are growing efforts to scale and finetune such models for multi-task spatiotemporal reasoning. Despite promising results, existing works often evaluate their models on a small set of non-uniform tasks, which makes it hard to quantify broad generalization across diverse tasks and domains. To address this challenge, we introduce AtmosArena, the first multi-task benchmark dedicated to foundation models in atmospheric sciences. AtmosArena comprises a suite of tasks that cover a broad spectrum of applications in atmospheric physics and atmospheric chemistry. To showcase the capabilities and key features of our benchmark, we conducted extensive experiments to evaluate two state-of-the-art deep learning models, ClimaX and Stormer, on AtmosArena, and compared their performance with other deep learning and traditional baselines. By providing a standardized, open-source benchmark, we aim to facilitate further advancements in the field, much like open-source benchmarks have driven the development of foundation models for language and vision.
[ "foundation models", "atmospheric sciences", "benchmarks" ]
Reject
https://openreview.net/pdf?id=5GI6BGToyw
https://openreview.net/forum?id=5GI6BGToyw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWRZhfBW3t", "zGkGnnerca", "ykH2RMGS8d", "v9d0aNqPEi", "uTN1CNkgup", "tJ92RGgCCw", "t9QemtqhSw", "o423t68D2u", "mNHFxFBYoz", "kkW2utXfx0", "k5sMHespX9", "jO0mqhmu8J", "hTFqvYPRKN", "fgmJ75gZ86", "d9fLSMsLWw", "VDnvWnKB9s", "VBU0mvp73C", "OGSixDR4A6", "Mptu1VxrFB", "IdzT6ltdLo", "GmPCUeRhUe", "CuTvL2KSQh", "A6cMAZJjcw", "3S5CE2dQv0", "31u9IeVvAF" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730368001548, 1733163275543, 1730211738600, 1732268756318, 1732510225366, 1732654474487, 1732474682014, 1732928765948, 1732474811283, 1734315143539, 1732654379643, 1732268837095, 1733170061491, 1737524225702, 1730719512317, 1732474554684, 1732654577178, 1732268964082, 1732468045340, 1730571370792, 1732523600639, 1733079659810, 1732474973461, 1732269044923, 1732628277763 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_9H2z" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_XwUv" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_xs49" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Area_Chair_dXHZ" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_eLRx" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Area_Chair_dXHZ" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_xs49" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_9H2z" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Authors" ], [ "ICLR.cc/2025/Conference/Submission12939/Reviewer_XwUv" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a benchmark suite in which they collect several benchmark datasets into one framework which therefore offers a broader set of tasks to evaluate foundation models in weather and climate. Using their AtmosArena setup, the authors evaluate two prominent foundation models, ClimaX and Stormer on the set of atmospheric phsysical tasks and highlight performance\\ndifferences that underline the usefulness of the benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The set of tasks under this new benchmark suite is larger than any of the individual existing frameworks and therefore offers the chance for a more in depth comparison of foundation models across tasks. The paper is clearly written and structured such that it is easy to follow.\", \"weaknesses\": \"In my opinion a shortcoming of the proposed framework is that it is more a collection of existing benchmark frameworks like Weatherbench, ClimateBench, and ClimateNet and misses some opportunities for improvement. 
This statement to me is not about \"novelty\", but rather remaining shortcomings or inconsistencies under this new framework.\\n\\n- While you state that in the future you would like to have a public\\n leaderboard, I think this is already something to be expected from such a\\n framework. Something like the google [Weatherbench leaderboard](https://sites.research.google/weatherbench/) is a decent\\n reference to get a first idea of performance across models\\n- For the task of climate downscaling you employ the RMSE, Bias and Pearson\\n correlation as available metrics, however, especially here, metrics like Power\\n Spectral Density (PSD), and PSD plots are important and offer more insights than a single error metric value\\n- in fact, for all climate related prediction tasks (Forecasting, Super-resolution, inpainting mentioned in your paper) spectral metrics are\\n commonly employed in the atmospheric science literature and should be a part of an atmospheric benchmark\\n- as another point, I think not including any probabilistic metrics across the\\n collected tasks is unfortunate because simple point estimates just come short\\n of the complexities given these tasks. A notion of prediction uncertainty and\\n an assessment of the predictive uncertainty with metrics like proper scoring\\n rules seems essential when aiming to do a holistic analysis. Probabilistic metrics were for example included in Weatherbench2 but seem to be missing from your framework. \\n\\nGiven that from my understanding you have collected existing frameworks under a new benchmark suite, I would have expected some additional improvements about the shortcomings of those, especially surrounding additional evaluation metrics as this is such an important part of benchmarking. 
While the employed metrics are very common in machine learning, I think additional metrics like PSD that exist in the atmospheric science literature are essential for a good benchmark framework.\", \"questions\": \"Paper Questions:\\n- in section 4.5, Table 4, the numbers reported for ClimaX, ClimaX frozen and\\n ClimateBench-NN are exactly the same as the values reported in the ClimaX\\n publication Table 2, which I do find a bit surprising that there is no\\n statistical difference given the non-deterministic nature of Deep Learning\\n model training and a rather small data size\\n- is there a reference for the Spectral Divergence metric you use, or is this a metric you propose in this work?\\n- Line 18, you say the \\\"first multi-task benchmark\\\", but in table 1 you list\\n existing works that consider \\\"multiple atmospheric tasks\\\" so I don't think\\n your claim of \\\"first\\\" holds, but maybe you can clarify?\", \"general_comment\": \"- I believe a work like this is highly dependent on the quality of the code,\\n this is difficult to assess in this review process which is very unfortunate\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer eLRx,\\n\\nIt's been 10 days since we posted our rebuttal. **As today is the last day the reviewer can respond to us, we sincerely hope the reviewer will provide feedback on our rebuttals soon because it is critically important to our work.** Our rebuttals have addressed the reviewer's two main points: support additional metrics like FLOPs and model parameters and provide the source code. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards, \\nThe authors of paper 12939\"}", "{\"summary\": \"This article proposes a new benchmark for evaluating atmospheric and weather foundation models. 
It comprises the following tasks:\\nweather forecasting, S2S forecasting, climate data infilling, climate model emulation, climate downscaling, and extreme weather events detection, with several metrics for each task, along with datasets, fine-tuning protocols, evaluation code, standardized metrics, and traditional and machine learning baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper presents a standardized benchmark for climate and weather evaluation with data, metrics, code, and baselines. It evaluates everything consistently and transparently, in a way that is easy to reproduce and that will facilitate and boost the research in this area.\", \"weaknesses\": \"AtmosArena does not offer a pretraining dataset, only evaluation.\\n\\nIt is not clear to me why we need a new benchmark. In the related work section on benchmarks, there should be a clear comparison with the other benchmarks and why this one is better and needed.\", \"questions\": \"AtmosArena has a great overlap with ClimateLearn, ClimaX, and Aurora. Almost all the tasks and datasets except for ClimateNet and Berkeley Earth are already in the other datasets. Is it that you created new annotations, or is it easier to use? Why do we need AtmosArena and not test on the other ones directly?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Rebuttal\", \"comment\": \"We thank the reviewer for your constructive feedback and for recognizing the novelty, reproducibility, and diversity of tasks AtmosArena offers. We address each specific concern below.\\n\\n> The authors should add relevant metrics, such as inference FLOPs and model parameters\\n\\n**We have added the inference FLOPs and model parameters of the three baselines (ClimaX, Stormer, and UNet) to the updated submission**. 
The table below summarizes the numbers.\\n\\n| | ClimaX | Stormer | UNet |\\n|---|---|---|---|\\n| FLOPs | 986.098B | 7377.751B | 969.404B |\\n| Parameter count | 110.842M | 468.752M | 577.745M |\\n\\n> The authors need to provide more evidence (e.g., frameworks and code) to demonstrate the benchmark\\u2019s ease of use and reproducibility.\\n\\n**We have submitted the code as supplementary material**. The code provides concrete instructions on reproducing the paper's results and using individual components like data and metrics.\\n\\n> Although the authors conducted extensive experiments to show that no single model excels across all tasks, I believe they should further analyze the experimental results rather than simply testing performance across different tasks.\\n\\nWe believe one important aspect of a multitask benchmark like AtmosArena is to compare the performance of different methods across a diverse set of tasks. Through comparisons in diverse tasks and settings, we have also observed interesting and consistent patterns that are useful for future development of foundation models:\\n- ClimaX tends to perform better in forecasting at longer lead times (S2S scale) than Stormer which is explained by the wider range of lead times used for pretraining ClimaX.\\n- Multi-source pretrained models like ClimaX tend to perform better at downstream tasks that are significantly different from pretraining, including climate model emulation and extreme weather events detection.\\n- Freezing the transformer backbone tends to work better in tasks with little data, which demonstrates the transferability of pretrained backbones to downstream tasks.\\n\\nIf you have any specific analysis in mind, please let us know. We are glad to run them during the rebuttal phase.\\n\\nWe thank the reviewer again for the constructive feedback and continued support. 
We believe the additional metrics and the published source code have significantly improved the paper, and **we sincerely hope that the reviewer can take our effort during the rebuttal into account** in the final assessment of the paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to respond to my queries. I agree that the ML tasks are fairly comprehensive but disagree that any of the tasks can be categorized as atmospheric chemistry. The tasks labelled as such are just downscaling or forecasting other atmospheric tracers, which are transported physically. I appreciate that the term 'foundation model' has been taken up in other domains but still disagree that the tasks represented here (or performed by e.g. ClimaX) represent the same level of generalization as e.g. LLMs. I feel the benchmark is a useful iteration on other benchmarks, but nothing more and stand by my score.\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for the constructive feedback and for recognizing the useful contribution of our work. We will keep expanding the benchmark with new tasks, datasets, and models to improve its diversity and applicability.\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer xs49,\\n\\nWe appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period (two days left), we would greatly appreciate your feedback on our rebuttals. We believe our rebuttals have addressed the reviewer's two main points: clarify what foundation models for atmospheric sciences mean and why the tasks we chose in the paper are complementary from both machine learning and atmospheric science perspectives. 
We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards,\\nThe authors of paper 12939\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer eLRx,\\n\\nIt's been more than a week since we posted our rebuttal. We appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period, we would greatly appreciate your feedback on our rebuttals. Our rebuttals have addressed the reviewer's two main points: support additional metrics like FLOPs and model parameters and provide the source code. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards,\\nThe authors of paper 12939\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer 9H2z,\\n\\nWe appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period (two days left), we would greatly appreciate your feedback on our rebuttals. We believe our rebuttals have addressed the reviewer's three main points: 1) create a leaderboard for the tasks supported by AtmosArena, 2) add PSD plots for the downscaling task, and 3) submit the source code to reproduce the results in AtmosArena. We spent significant time and effort on the additional results and analysis and we are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards,\\nThe authors of paper 12939\"}", "{\"metareview\": \"The authors propose a new benchmark to evaluate atmosphere foundation models. 
The benchmark evaluates models on different tasks on 4 already existing data sources and measures a wide variety of metrics.\\n\\nReviewers found this to be valuable for the research community in order to measure progress in the field of atmospheric sciences.\", \"two_important_weaknesses_for_the_reviewers_were\": \"* The lack of code (eLRx, 9H2z) \\n* The significance of the contributions with respect to previous benchmarks. For example, eLRx requests further analysis of the results\\n xs49 asks for the contribution of this work besides being a \\u201ccollection of benchmarks\\u201d, the lack of diversity of the tasks (9H2z), and XwUv \\nrequests a better explanation on the differentiating factor of the proposed benchmark.\\n\\nThe authors provided the code during the rebuttal, but reviewer 9H2z was unable to install the dependencies, raising some concerns about\\nthe state of the code. \\n\\nWith respect to the contribution of this work, in response to 9H2z, the authors added two additional metrics: \\n\\nPSD plots and Spectral Divergence (SpecDiv). And further explanation of the differences with respect to previous evaluations in response\\nto XwUv.\\n\\n**Overall** this work is clearly in the borderline, even considering a score update by eLRx, who did not engage in discussion, from 5 to 6 and \\ngiven remaining concerns by 9H2z during the reviewer / AC discussion, I cannot recommend the acceptance of this work. 
However, I encourage the authors to polish the paper, improve the codebase, and make the arena more usable.\", \"additional_comments_on_reviewer_discussion\": \"During the reviewer / AC discussion 9H2z expressed concerns about not able to even install dependencies and the fact that this work is too oriented towards error metrics.\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer eLRx,\\n\\nWe appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period, we would greatly appreciate your feedback on our rebuttals. We believe our rebuttals have addressed the reviewer's two main points: support additional metrics like FLOPs and model parameters and provide the source code. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards, The authors of paper 12939\"}", "{\"title\": \"Authors' Rebuttal\", \"comment\": \"We thank the reviewer for your constructive feedback and for appreciating the contributions, well-chosen data, tasks, and baselines of AtmosArena, and the clear writing of our paper. We address each specific concern below.\\n\\n> It isn't entirely clear that any of the presented models are foundation models; it feels like the 'atmosphere' is a bit too specific of a system to qualify as foundational.\\n\\n**What we mean by \\u201cfoundation models for the atmosphere\\u201d is models that can solve various tasks in atmospheric sciences**, including forecasting, downscaling, extreme weather events detection, etc. We adopt the term \\u201cfoundation\\u201d similarly to how people have been referring to GPT-x or Llama as language foundation models or Stable Diffusion as an image foundation model. **Foundation models are also ubiquitous in other scientific fields** such as EVO for genome, Prithvi for Earth science and remote sensing, AstroCLIP for astrophysics, etc. 
The fact that both ClimaX and Stormer outperform the non-pretrained baseline UNet in many tasks shows their capability of generalizing to different downstream tasks.\\n\\n> The paper could do a better job of explaining why the various tasks are complementary, e.g. because they all rely on some underlying understanding of the covariance of the atmosphere over different spatial and temporal scales (be explicit).\\n\\n**The tasks we chose are complementary from both atmospheric and machine learning perspectives:**\\n- From a domain perspective, atmospheric tasks are broadly categorized into atmospheric physics or atmospheric chemistry, and AtmosArena supports both categories.\\n- From a machine learning perspective, tasks in AtmosArena are mapped to well-defined problems in machine learning: forecasting, segmentation, super-resolution, inpainting, and counterfactual prediction. As the reviewer mentioned above, these tasks together test a model\\u2019s understanding of the atmosphere over different spatial and temporal scales.\\n\\nWe thank the reviewer again for the constructive feedback and continued support. We believe the reviewer\\u2019s questions and suggestions have significantly improved the paper, and **we sincerely hope that the reviewer can take our discussion into account** in the final assessment of the paper.\"}", "{\"comment\": \"We thank the reviewer for your feedback on our rebuttals. We address the remaining concerns below.\\n\\n> PSD metrics are relevant for all sorts of other spatio-temporal prediction tasks\\n\\nWe agree. Indeed, **the PSD metrics in AtmosArena are applicable across all tasks in our benchmark suite. 
Our presentation of PSD plots for climate downscaling serves to demonstrate that our benchmark framework fully supports these metrics.** We will include an appendix section in the revised manuscript showing PSD metrics for other tasks.\\n\\n> For example your updated manuscript regarding Climate Downscaling reads in line 406: \\\" Stormer is the best model in this task with the lowest RMSE and Absolute Mean Bias for most variables, followed by the Unet baseline\\\". But is that the case if you consider the PSD plots?\\n\\nFigure 2 in our paper shows that there is no clear winner in terms of PSD metrics, and the ranking of the baselines varies across different data samples and variables. Given that Stormer outperforms the other two baselines significantly in RMSE, we can still conclude that Stormer is the best method for this task.\\n\\n> I would also argue that a comprehensive benchmarking framework should \\\"expose\\\" such shortcomings\\n\\nWe will add the CRPS to Table 1 and discuss the lack of uncertainty estimation of existing methods in the updated manuscript. However, **this does not present a weakness of our benchmark, but rather limitations of existing methods.** \\n\\n> I would argue that for a community benchmark framework, it should include docstrings for all functions and classes which seems to be the case for some but not others. I attempted to run the conda installation and code but got\\n\\n**We respectfully disagree with judging the submission based on code documentation at this stage.** The paper should be evaluated on its primary contributions: establishing a benchmark framework for atmospheric foundation models, defining comprehensive tasks and metrics, and demonstrating the framework's capabilities through extensive experiments. While we acknowledge the importance of complete code documentation, refinement of the codebase - including improved docstrings - will naturally follow publication, as is standard practice in the field. 
The scientific merits of the paper itself should be the focus of the current evaluation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces AtmosArena, a benchmark designed for evaluating foundation models in atmospheric sciences, focusing on multi-task spatiotemporal learning. The paper mainly evaluates two prominent models, ClimaX and Stormer, comparing their performance with traditional baselines across various tasks. Stormer performs well in short-term forecasting, while ClimaX excels in tasks with longer temporal horizons, highlighting the benefits of multi-source pretraining. Overall, the authors demonstrate that pre-trained models generally outperform task-specific baselines, and AtmosArena serves as a comprehensive tool for advancing atmospheric modeling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **New Perspective:** The article evaluates atmosphere modeling from a multitasking perspective and verifies the effectiveness of two pre-trained models: ClimaX (multi-source pre-trained models) and Stormer (single-source pre-trained models).\\n2. **Open Source:** By providing a standardized and open-source framework, AtmosArena sets a new standard for reproducibility and transparency in multi-task atmospheric learning.\\n3. **Include Finetuning:** The paper explores the finetuning protocols for foundation models, comparing frozen versus fully finetuned backbones.\\n4. **Diverse Tasks:** AtmosArena benchmark includes both atmospheric physics and atmospheric chemistry tasks, and the benchmarks utilize well-regarded datasets like ERA5, ClimateBench, and ClimateNet.\", \"weaknesses\": \"1. Regarding the completeness of the benchmark, I believe the authors should add relevant metrics, such as **inference FLOPs and model parameters**.\\n\\n2. 
The authors need to provide more **evidence (e.g., frameworks and code)** to demonstrate the benchmark\\u2019s ease of use and reproducibility.\\n\\n3. Although the authors conducted extensive experiments to show that no single model excels across all tasks, I believe they should **further analyze** the experimental results rather than simply testing performance across different tasks.\", \"questions\": \"See the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer eLRx,\\n\\nWe appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period (two days left), we would greatly appreciate your feedback on our rebuttals. We believe our rebuttals have addressed the reviewer's two main points: support additional metrics like FLOPs and model parameters and provide the source code. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards,\\nThe authors of paper 12939\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for the constructive feedback and for recognizing the contribution of our work to the community.\"}", "{\"title\": \"Authors' Rebuttal\", \"comment\": \"We thank the reviewer for your constructive feedback and for appreciating the large set of tasks AtmosArena offers and the clear writing of the paper. We address each specific concern below.\\n\\n> While you state that in the future you would like to have a public leaderboard, I think this is already something to be expected from such a framework.\\n\\nWe thank the reviewer for this suggestion. **We created a leaderboard at https://atmosarena.github.io/leaderboard/ for the tasks we consider in this work**. The leaderboard can be expanded in the future to include more models, tasks, and evaluation metrics. 
We have also added a link to the leaderboard to the paper.\\n\\n> Metrics like Power Spectral Density (PSD), and PSD plots are important and offer more insights than a single error metric value\\n\\nWe thank the reviewer for this great suggestion. **We have added the PSD plots for the downscaling task in the updated submission**. To create these plots, we computed the 2D Power Spectral Density using the Fast Fourier Transform (FFT) for each spatial field, then performed radial averaging to obtain 1D PSD curves that show how power varies with spatial frequency. For each variable (T2M, Z500, T850), we plotted the PSD curves of the ground truth and predictions from three models (ClimaX, Stormer, UNet) on a log-log scale.\\n\\nThe PSD plots (Figure 2 in the paper) show that all three models show excellent agreement with the ground truth across low to medium spatial frequencies for all variables, indicating they accurately capture large-scale spatial patterns. However, there are notable differences at high spatial frequencies (>0.2): UNet tends to underestimate the power at these frequencies, suggesting it may smooth out fine-scale details, while ClimaX and Stormer better preserve these high-frequency components. The results suggest that two pretrained models, ClimaX and Stormer, have an advantage in preserving fine-scale spatial details compared to UNet.\\n\\n> I think not including any probabilistic metrics across the collected tasks is unfortunate because simple point estimates just come short of the complexities given these tasks\\n\\nWe provided the probabilistic metrics, including CRPS and Spread-skill ratio in the source code. **We did not include any probabilistic metrics in the paper only because all models we consider: ClimaX, Stormer, and UNet, are deterministic models**. Even the two newer foundation models Aurora and Prithvi WxC are deterministic. We believe this is not a limitation of our evaluation framework but rather a limitation of existing models. 
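For concreteness, the 2D-PSD-with-radial-averaging procedure described above (FFT of each spatial field, then averaging the power over annuli of constant spatial frequency to obtain a 1D curve) can be sketched as follows. This is a minimal NumPy illustration with assumed names (`radial_psd`, `field`), not the actual AtmosArena evaluation code:

```python
import numpy as np

def radial_psd(field):
    """Radially averaged 1D power spectral density of a 2D field.

    Sketch of the procedure described above: 2D FFT -> power spectrum
    -> mean power per integer radial-frequency bin. All names are
    illustrative, not taken from the AtmosArena codebase.
    """
    h, w = field.shape
    # 2D power spectrum, zero frequency shifted to the center
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    # integer radial distance of every pixel from the spectrum center
    ky, kx = np.indices((h, w))
    r = np.hypot(ky - h // 2, kx - w // 2).astype(int)
    # average power within each radial bin -> 1D PSD curve
    total = np.bincount(r.ravel(), weights=power.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)

# ground truth and each model's predictions would then be compared
# by plotting their curves on a log-log scale
curve = radial_psd(np.random.default_rng(0).normal(size=(64, 64)))
```

Comparing such curves between the ground truth and each model at high radial frequencies is what exposes the fine-scale smoothing behavior discussed above for UNet.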
We can always benchmark future models on uncertainty metrics if they provide probabilistic predictions.\\n\\n> I would have expected some additional improvements about the shortcomings of those, especially surrounding additional evaluation metrics\\n\\nIn addition to the PSD plots the reviewer suggested, we also provide Spectral Divergence (SpecDiv), a physics-based metric not considered by previous works.\\n\\n> In section 4.5, Table 4, the numbers reported for ClimaX, ClimaX frozen and ClimateBench-NN are exactly the same as the values reported in the ClimaX publication Table 2\\n\\nFinetuning ClimaX is expensive because we have to finetune one model for each target variable in ClimateBench, so we used the model checkpoints provided by the ClimaX authors as is. Since the model itself is deterministic and we used the same train and test splits as ClimaX, we were able to reproduce the numbers.\\n\\n> Is there a reference for the Spectral Divergence metric you use, or is this a metric you propose in this work?\\n\\nTo the best of our knowledge, Spectral Divergence was first proposed for image analysis [1] and ChaosBench [2] was the first paper that used it for weather and climate.\\n\\n[1] Chang, Chein-I. \\\"Spectral information divergence for hyperspectral image analysis.\\\" IEEE 1999 International Geoscience and Remote Sensing Symposium. IGARSS'99 (Cat. No. 99CH36293). Vol. 1. IEEE, 1999.\\n\\n[2] Nathaniel, Juan, et al. \\\"Chaosbench: A multi-channel, physics-based benchmark for subseasonal-to-seasonal climate prediction.\\\" arXiv preprint arXiv:2402.00712 (2024).\\n\\n> I believe a work like this is highly dependent on the quality of the code\\n\\n**We have submitted the source code as supplementary material**. The code has detailed instructions on reproducing the experiments in the paper. 
We also developed individual components such as models, data, and metrics in a way that can be used independently.\\n\\nWe thank the reviewer again for the constructive feedback and continued support. We spent significant efforts in providing additional metrics, published source code, and the public leaderboard, which we believe have significantly improved the paper. **We sincerely hope that the reviewer can take our effort during the rebuttal into account** in the final assessment of the paper.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}", "{\"summary\": \"The authors present a comprehensive collection of benchmarks, with appropriate baseline models, for tasks related to atmospheric science and prediction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"While each component of the dataset is not novel, the combination and the testing of multi-model foundational models make this a useful contribution. The baselines are well chosen and the data appears to be well structured (though I have not tested this). The paper is well structured and well written. In particular, each of the sub-tasks defined in the 'arena' are appropriate and complementary, and the baselines are appropriate and test interesting aspects of their generalizability.\", \"weaknesses\": \"It isn't entirely clear that any of the presented models are foundation models; it feels like the 'atmosphere' is a bit too specific of a system to qualify as foundational. 
To some extent this is something the cited models need to demonstrate, but nonetheless, to demonstrate its utility, this paper should describe what they mean by a foundation model for the atmosphere, and why the tasks they present (within a fairly narrow range of spatial and temporal scales) would test such foundational knowledge. Relatedly, the paper feels a bit like a collection of benchmarks and could do a better job of explaining why the various tasks are complementary, e.g. because they all rely on some underlying understanding of the covariance of the atmosphere over different spatial and temporal scales (be explicit).\", \"questions\": \"Please explicitly describe how the presented tasks complement each other and aim to test the necessary variables, and spatial and temporal scales to qualify a model as 'foundational'.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"I thank the authors for the additional effort they have put in, especially regarding the leaderboard. However, I have some remaining remarks:\\n\\nRegarding the PSD metrics, I apologize for being unclear. My point was not specific to climate downscaling as PSD metrics are relevant for all sorts of other spatio-temporal prediction tasks some of which are included in your framework, such as precipitation prediction in climate emulation. My goal is to underline that the evaluation needs to be more holistic than what the Machine Learning lense currently does, predominantly using distance RMSE metrics. For example your updated manuscript regarding Climate Downscaling reads in line 406: \\\" Stormer is the best model in this task with the lowest RMSE and Absolute Mean Bias for most variables, followed by the Unet baseline\\\". But is that the case if you consider the PSD plots? 
I am not saying that your task is to find the best model, but I would argue that evaluation needs to be a lot more nuanced.\\n\\nRegarding probabilistic metrics: I understand your point about the models being deterministic. While you argue rightfully that this is a shortcoming of the models, I would also argue that a comprehensive benchmarking framework should \\\"expose\\\" such shortcomings, because a benchmark should not just test what models can already do but set a higher bar. For example, Table 1 could include probabilistic metrics like CRPS with a note that none of the models are able to do probabilistic forecasts. But at the moment the word \\\"uncertainty\\\" appears once in your manuscript, and with no probabilistic metric mentioned or discussion about uncertainty in forecasting or benchmarking, I would argue that this \\\"weakness\\\" remains.\\n\\nRegarding the quality of the code: Thank you for including the source code in the zip. I did not expect a discussion about code quality, but at the minimum I would argue that for a community benchmark framework, it should include docstrings for all functions and classes, which seems to be the case for some but not others. I attempted to run the conda installation and code but got:\\n\\n```\\nPip subprocess error:\\nERROR: No matching distribution found for enrich==0.1.dev82\\n\\nfailed\\n\\nCondaEnvException: Pip failed\\n```\\n\\non Ubuntu 20.04.\\n\\nI would like to update my score to 5 to reflect the additional work that the authors have put in; however, I would not change my score to \\\"acceptance\\\".\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer eLRx,\\n\\nIt's been more than a week since we posted our rebuttal. We appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period, we would greatly appreciate your feedback on our rebuttals. 
Our rebuttals have addressed the reviewer's two main points: support additional metrics like FLOPs and model parameters and provide the source code. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards, \\nThe authors of paper 12939\"}", "{\"title\": \"Reminder of our rebuttals\", \"comment\": \"Dear Reviewer XwUv,\\n\\nWe appreciate that you are likely to review multiple other papers; however, as we approach the end of the discussion period (two days left), we would greatly appreciate your feedback on our rebuttals. We believe our rebuttals have addressed the reviewer's two main points: explain the need for AtmosArena as a unified benchmark for atmospheric foundation models, and elaborate on the contributions of AtmosArena compared to previous works like ClimateLearn, Aurora, and ClimaX. We are happy to address any remaining concerns or questions to improve the paper further.\\n\\nKind regards, \\nThe authors of paper 12939\"}", "{\"title\": \"Authors' rebuttal\", \"comment\": \"We thank the reviewer for your constructive feedback and for recognizing our work\\u2019s consistency, transparency, and reproducibility. We address each specific concern below.\\n\\n> AtmosArena, does not offer a pretraining dataset, only evaluation.\\n\\n**We believe standardizing pretraining data is neither feasible nor desirable** - developers of atmospheric foundation models should have the flexibility to select their own datasets, resolutions, variables, and pretraining tasks, as these choices are fundamental to model development. However, all foundation models should demonstrate strong performance across diverse applications. **This mirrors established practices in NLP, computer vision, and other scientific domains like protein modeling**, where models train on varied datasets but are evaluated against common benchmarks. 
AtmosArena provides this standardized evaluation framework for atmospheric sciences.\\n\\n> Why do we need AtmosArena?\\n\\nExisting foundation models, including ClimaX, Aurora, and the recently published Prithvi WxC, all benchmark their models on non-overlapping sets of tasks, making comparisons difficult. AtmosArena aims to provide a single, open-source, multitask framework for benchmarking current and future foundation models, **making it both easier and fairer to compare models and assess progress in the field**. Having a standardized benchmark also helps improve expandability, since we can keep adding new tasks, datasets, and metrics to AtmosArena for evaluating future foundation models. A similar effort in NLP, **the GLUE benchmark, unified diverse language tasks and datasets proposed in other papers into a single evaluation suite, driving significant progress in the field**. AtmosArena aspires to bring the same impact to foundation models for climate and atmospheric sciences.\\n\\n> Comparison with ClimateLearn.\\n\\n**AtmosArena extends ClimateLearn significantly in many dimensions:**\\n- More tasks and evaluation metrics: ClimateLearn supports 3 tasks - weather forecasting, climate downscaling, and climate projection, while AtmosArena additionally provides 5 more tasks - S2S forecasting, extreme weather events detection, climate data infilling, atmospheric chemistry downscaling, and air composition forecasting. In terms of evaluation metrics, ClimateLearn uses RMSE, ACC, and Pearson correlation, while AtmosArena additionally supports SpecDiv for forecasting, and IoU, Precision, Recall, F-1 score, and Specificity for extreme weather events detection.\\n- More datasets: ClimateLearn supports 4 datasets - ERA5, CMIP6, ClimateBench, and PRISM. 
AtmosArena does not have PRISM, but supports Berkeley Earth for infilling, ClimateNet for extreme weather events detection, CAMS Analysis for air composition forecasting, and GEOS-CF for atmospheric chemistry downscaling. Our ERA5 dataset is also at a higher resolution (1.40625deg) compared to ClimateLearn (5.625deg).\\n- Much stronger models: ClimateLearn implements three standard deep learning models, Resnet, Unet, and ViT, while AtmosArena includes ClimaX and Stormer, two strong models for atmospheric science.\\n\\n> Comparison with ClimaX and Aurora\\n\\nAtmosArena supports climate data infilling, extreme weather events detection, and atmospheric chemistry downscaling, which were not considered by either ClimaX or Aurora. In terms of datasets, we additionally support Berkeley Earth for infilling, ClimateNet for extreme weather events detection, and GEOS-CF for atmospheric chemistry downscaling. AtmosArena also provides two standard finetuning protocols to evaluate foundation models that were not studied carefully in previous works.\\n\\nWe thank the reviewer again for the constructive feedback and continued support. We believe the reviewer\\u2019s questions and suggestions have significantly improved the paper, and **we sincerely hope that the reviewer can take our discussion into account** in the final assessment of the paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks to the reviewers for their clarifications. I also appreciate the effort made to address the other reviewers' concerns. Selecting, standardizing, extending, and improving datasets is helpful for the community and requires a lot of work. I appreciate that you have built upon existing benchmarks and improved them. Accordingly, I stand with my score of marginally above the acceptance threshold.\"}" ] }
5G9PrHERql
F2M-Reg: Unsupervised RGB-D Registration with Frame-to-Model Optimization
[ "Zhinan Yu", "Zheng Qin", "Yijie Tang", "Yongjun Wang", "Renjiao Yi", "Chenyang Zhu", "Kai Xu" ]
This paper focuses on training a robust RGB-D registration model without ground-truth pose supervision. Existing methods usually adopt a pairwise training strategy based on differentiable rendering, which enforces the photometric and the geometric consistency between the two registered frames as supervision. However, this frame-to-frame framework suffers from poor multi-view consistency due to factors such as lighting changes, geometry occlusion and reflective materials. In this paper, we present F2M-Reg, a novel frame-to-model optimization framework for unsupervised RGB-D registration. Instead of frame-to-frame consistency, we leverage the neural implicit field as a global model of the scene and use the consistency between the input and the rerendered frames for pose optimization. This design can significantly improve the robustness in scenarios with poor multi-view consistency and provides better learning signal for the registration model. Furthermore, to facilitate the neural field optimization, we create a synthetic dataset, Sim-RGBD, through a photo-realistic simulator to warm up the registration model. By first training the registration model on Sim-RGBD and later unsupervisedly fine-tuning on real data, our framework enables distilling the capability of feature extraction and registration from simulation to reality. Our method outperforms the state-of-the-art counterparts on two popular indoor RGB-D datasets, ScanNet and 3DMatch. Code and models will be released for paper reproduction.
[ "RGB-D registration", "unsupervised learning", "frame-to-model optimization" ]
Reject
https://openreview.net/pdf?id=5G9PrHERql
https://openreview.net/forum?id=5G9PrHERql
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zUowou9NV1", "zRZp3HRLMC", "zFNlKoTxTA", "y86rmVgHUm", "vCEqMxzJFw", "pMnJXIHRgY", "nB2yO4X6Bi", "mWicSYRnrl", "l7mtVUf6dA", "l5yq63VRK0", "keqgjoi8Ki", "jlrwO6m7QC", "jAg63ChGvo", "jAb3cuCes0", "iSZh4mLfv3", "h8M5nULmzH", "ZsatAXwhux", "XHGlxYIRZm", "VYmMBOtTsD", "VCWj3PBc0E", "UAzluswsHc", "SwnJ4xEjEu", "MhwREqOUa3", "L5pat4vx8M", "GUaM56rtbo", "EzJmC5T9i9", "9tt8CWulHe", "9LbpUuhAA6", "56SchcH3RU", "4DVHq6PbJ9", "42FbNF9fZF" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730643825559, 1732543961234, 1733227020453, 1737523836804, 1730641671708, 1732367486489, 1732367872240, 1733214894330, 1730573506611, 1733215322565, 1732902256744, 1732902017231, 1732738179079, 1732367611569, 1732902065889, 1733194253628, 1734752890203, 1732367439610, 1732902137903, 1732368595337, 1732367406342, 1732542230360, 1733212543557, 1732902511932, 1733302967248, 1732367844922, 1732823371166, 1733216564586, 1732687623223, 1732624134734, 1730534192828 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_4zeD" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_4zeD" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_Sxy2" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_Qbae" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_bx3G" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Area_Chair_6wN4" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_4zeD" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_bx3G" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_Qbae" ], [ "ICLR.cc/2025/Conference/Submission7404/Authors" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_bx3G" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_Sxy2" ], [ "ICLR.cc/2025/Conference/Submission7404/Reviewer_bx3G" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a frame-to-model optimization framework guided by a neural implicit field for unsupervised RGB-D registration. By introducing the differential rendering capabilities of neural radiance fields, better pose supervision can be achieved. Meanwhile\\uff0cthis paper creates a synthetic dataset for warming up registration model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces neural radiance fields to provide global information for the registration training\\n2. 
It builds a synthetic dataset to pretrain the model, ensuring the rationality of pose initialization.\", \"weaknesses\": \"1. The comparison with existing work seems unfair:\\n\\n - SOTA methods like PointMBF are trained purely without supervision, while the proposed method requires an extra dataset with pose labels. In Table 1, the metrics of the proposed method look slightly better than those of existing unsupervised methods. Does the improvement come from the benefit of including the extra dataset? Please provide results without the bootstrap (i.e., without training the registration model on the Sim-RGBD dataset) for Table 1, or please consider other ways to make a fair comparison. Will other work benefit from the extra dataset with pose labels as well?\\n\\n - In line 368-370, the authors propose to evaluate a more difficult setting \\u201cevaluating view pairs sampled 50 frames apart\\u201d and mentioned \\u201cHowever, due to insufficient overlap in some segments of the data for pairs sampled 50 frames apart, the evaluation significantly distorts both the mean and median values. As a result, we have chosen not to include these results in our experimental presentation.\\u201d My question: why this 50 frames apart setting? Also, if the insufficient overlap pairs are excluded, what do the results look like?\\n\\n2. The influence of the warming up dataset is not studied. How large should the warming up dataset be? How similar should the warming up dataset be to the target datasets? What\\u2019s the influence of the number of ShapeNet objects on the final metrics? For the construction of datasets, there are already many synthetic datasets for indoor scenes, such as Replica and Habitat. This paper does not demonstrate the advantages of the custom dataset compared to other datasets.\\n\\n3. The training efficiency. 
As NeRF is very slow in training, the proposed method should require much more computation resources than PointMBF and is hard to scale.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Seems no improvement in strict metrics when compared PointMBF\", \"comment\": \"Thanks the authors for providing these extra results.\\n\\nFrom the table, we can conclude that the improvement on ScanNet of 20 frames over PointMBF mostly comes from the extra training data. From the table, I may not agree with the conclusion that \\\"Our method still outperforms PointMBF even without bootstrapping, especially it can reduce extremely erroneous registrations.\\\" In some metrics, Trans(5) and Chamfer(5), the method is worse. I would say the two methods have about the same performance.\\n\\nFrom Table 2 and Table 4 on 3DMatch in the 50 frames apart setting, it also shows that without introducing extra training data, the performance of the method measured by strict metrics is also similar (or worse?) to PointMBF. In the setting of 20 frames in Table 1, will the result of the method with the extra dataset introduced also be similar to PointMBF, or worse?\\n\\n| | Rot(5) | Trans(5) | Chamfer(1) |\\n| :--------: | :----: | :------: | :--------: |\\n| PointMBF | 59.3 | 34.2 | 42.9 |\\n| The method | 59.0 | 30.4 | 43.2 |\\n\\nYet, the method has shown improvements on erroneous cases. Among all the comparisons on different datasets, the method shows improvements on ScanNet at 50 frames apart, and, as the authors said, in the setting of 20 frames the method also shows improvements over PointMBF on less strict metrics with larger error thresholds (Rot(10)/Trans(10)).\"}", "{\"comment\": \"**Q1: The influence of the number of the objects in the synthetic dataset.**\", \"a\": \"We conduct an additional ablation experiment on the number of objects in Sim_RGBD by rendering a dataset containing 50/400 objects per scene. 
The results, tested on ScanNet under the 50 frames apart setting with 20 scenes, are presented below. The sparse distribution of objects (Sim_RGBD with 50 objects per scene) leads to insufficient geometric information in the RGB-D data of each frame, limiting the training effectiveness for the registration model.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :----------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| Sim_RGBD(400 objs) | 72.7 | 79.8 | 88.4 | 47.9 | 66.7 | 77.3 | 58.4 | 73.4 | 76.3 |\\n| Sim_RGBD(50 objs) | 59.9 | 73.2 | 88.2 | 29.5 | 50.8 | 70.6 | 41.1 | 62.9 | 69.3 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents F2M-Reg, an unsupervised RGB-D registration framework that addresses the frame-to-frame registration task by dealing with multi-view inconsistencies and bootstrapping from a synthetic dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. F2M-Reg stands out by shifting from a frame-to-frame to a frame-to-model approach for RGB-D registration, which is an extension of existing approaches in the context of unsupervised 3D vision tasks.\\n2. This use of a neural implicit field as a global scene model to capture broader scene-level information is a possible direction to handle complex conditions, such as low overlap and lighting changes, where traditional methods often fall short.\\n3. The introduction of a synthetic bootstrapping dataset, Sim-RGBD, bridges the gap between synthetic and real-world performance in unsupervised settings, which is a notable improvement in unsupervised model initialization.\", \"weaknesses\": \"1. Although F2M-Reg is compared with several baselines, it is unclear where the improvement comes from. 
According to Table 4, the results without bootstrapping are not exciting enough.\\n2. In order to evaluate the effectiveness of the neural implicit field-guided mechanism, this paper needs additional experiments and comparisons with SOTA approaches without bootstrapping.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1: About the definition of 'bootstrap'.**\\n\\nWe use the term 'bootstrap' in this paper to describe the process of initializing the model for more effective unsupervised training. This process is achieved by leveraging the synthetic data. We admit that the usage of this term is somewhat inaccurate, and we will replace this term in the future version.\\n\\n**Q2: Synthetic Bootstrapping achieves more improvements on 3DMatch than ScanNet.**\\n\\nThe reasons for this inconsistency are two-fold. First, as we use ScanNet for testing in all the experiments, there is naturally a domain gap for the model trained on 3DMatch which limits its performance. Second, ScanNet contains more scenes than 3DMatch ($1045$ vs. $71$ for training), which also affects the generalization of the model. By leveraging pre-training, the varied synthetic data and the supervised training manner help fill this gap and improve the generalization of the model.\\n\\n**Q3: Effectiveness in dynamic scenarios.**\\n\\nYes, this paper focuses on unsupervised RGB-D registration in static scenarios, like the previous works UR&R [1], BYOC [2], LLT [3] and PointMBF [4]; unsupervised RGB-D registration in dynamic scenarios is out of the scope of this paper. However, we would note that there are also neural implicit fields for dynamic scenes, which could be integrated into our pipeline to solve the dynamic problem. We will leave this as future work.\\n\\n[1]: Mohamed El Banani, Luya Gao, and Justin Johnson. 
Unsupervisedr&r: Unsupervised point cloud registration via differentiable rendering. CVPR 2021, pp. 7129-7139.\\n\\n[2]: Mohamed El Banani and Justin Johnson. Bootstrap your own correspondences. ICCV 2021, pp. 6433-6442.\\n\\n[3]: Ziming Wang, Xiaoliang Huo, Zhenghao Chen, Jing Zhang, Lu Sheng, and Dong Xu. Improving rgb-d point cloud registration by learning multi-scale local linear transformation. ECCV 2022, pp. 175-191.\\n\\n[4]: Mingzhi Yuan, Kexue Fu, Zhihao Li, Yucong Meng, and Manning Wang. Pointmbf: A multi-scale bidirectional fusion network for unsupervised rgb-d point cloud registration. ICCV 2023, pp. 17694-17705.\"}", "{\"comment\": \"**Q3: The influence of the warming up dataset.**\\n\\n**About the dataset size.** To investigate the influence of the dataset size, we use $20$, $40$, $60$ and $80$ scenes to bootstrap the model and directly evaluate without fine-tuning. The results are shown in the table below. It is observed that increasing the dataset size increases the performance, but the improvements are not significant. This means the synthetic bootstrap does not rely on a large synthetic dataset.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :-----------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| Sim_RGBD(20 scenes) | 72.7 | 79.8 | 88.4 | 47.9 | 66.7 | 77.3 | 58.4 | 73.4 | 76.3 |\\n| Sim_RGBD(40 scenes) | 73.4 | 80.4 | 89.0 | 48.1 | 67.5 | 78.1 | 58.7 | 74.1 | 77.1 |\\n| Sim_RGBD(60 scenes) | 72.6 | 79.5 | 88.2 | 48.1 | 66.5 | 77.1 | 58.4 | 73.2 | 76.1 |\\n| Sim_RGBD(80 scenes) | 73.5 | 80.1 | 88.1 | 48.7 | 67.3 | 78.0 | 58.9 | 74.0 | 77.0 |\\n\\n**About the similarity.** Actually, our Sim-RGBD dataset is already very different from the real datasets. 
First, our dataset is constructed from ShapeNet, which contains objects that cannot appear together in real scenes, such as planes and guitars, buses and chairs, etc. This induces inconsistent semantics and geometries between the synthetic and the real datasets. Second, our dataset puts the objects densely and randomly in the space, without collision detection or gravity, so there are severe layout differences between the synthetic and the real datasets. Nevertheless, the synthetic bootstrap still effectively provides a good initialization of the registration model, which demonstrates that the synthetic and the real datasets do not have to be very similar.\\n\\n**About other synthetic datasets.** We conducted an ablation study by training on the Replica dataset and testing on ScanNet. The results of these experiments are summarized in the table below. The findings indicate that training on Replica yields performance comparable to training on Sim_RGBD. This demonstrates the robustness and generality of our bootstrapping stage for utilizing simulation data across different datasets.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| Replica | 72.3 | 80.2 | 89.1 | 46.2 | 66.2 | 78.2 | 57.7 | 73.5 | 77.2 |\\n| Sim_RGBD | 72.6 | 79.5 | 88.2 | 48.1 | 66.5 | 77.1 | 58.4 | 73.2 | 76.1 |\\n\\n**Q4: Time efficiency.**\\n\\nYes, our method increases the training time, mainly due to the tracking and the mapping stages when optimizing the neural implicit field and the relative poses. However, we can reduce the tracking iterations (we use $100$ in the experiments) for a balance between accuracy and efficiency. 
As shown in Tab.7 of the main paper, reducing the tracking iterations only marginally influences the performance but effectively reduces the training time.\\nMoreover, we would also note that PointMBF uses $20\\\\times$ more training data than ours: PointMBF uses all pairs that are $20$ frames apart, whereas we sample training pairs in a consecutive manner. For this reason, our method is more data-efficient than PointMBF.\"}", "{\"comment\": \"Yes, the results presented are average values across multiple scenes. Due to the large size of the dataset, we evaluated only 74 scenes (out of 380 in total) from the iPhone RGB-D sequences in the table above. We will include results on both the DSLR and iPhone RGB-D sequences in the future version.\"}", "{\"summary\": \"The paper introduces an unsupervised method for robust RGB-D registration without ground-truth pose supervision. Unlike prior methods that rely on frame-to-frame photometric and geometric consistency, which are often affected by lighting changes, occlusion, and reflective materials, F2M-Reg employs a frame-to-model approach. The method begins with pre-training the model on a synthetic dataset, Sim-RGBD, with ground-truth poses, and subsequently fine-tunes it on real-world datasets without ground-truth poses by leveraging a neural implicit field as a 3D scene representation for pose optimization. This approach enhances robustness against multi-view inconsistencies, as demonstrated by experimental results comparing F2M-Reg with existing methods. 
In summary, F2M-Reg contributes a new unsupervised RGB-D registration framework, a synthetic dataset for initial model training, and an effective frame-to-model approach, setting new benchmarks on popular RGB-D datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper identifies the limitations of frame-to-frame matching in unsupervised learning, particularly due to instabilities arising from lighting variations, occlusions, and reflective surfaces. The proposed frame-to-model matching, supported by a neural implicit field (NeRF), effectively mitigates these issues, demonstrating a significant improvement over traditional methods.\", \"weaknesses\": [\"The use of the term bootstrap in the paper is potentially misleading. In deep learning, bootstrapping generally refers to iterative self-training, where a model refines itself by generating pseudo-labels and learning from them. The training on synthetic data described in this paper aligns more with pre-training. However, the fine-tuning on real datasets with initial poses refined through NeRF could be called bootstrapping.\"], \"questions\": [\"The performance gains from Sim-RGBD bootstrapping across different datasets are not quite consistent. Can the authors provide insights into why bootstrapping appears less critical for ScanNet compared to 3DMatch?\", \"NeRF optimization, particularly joint pose and neural field optimization, typically assumes static scenes. This assumption might limit the method's performance in dynamic environments. If this is indeed a constraint, it should be acknowledged in the paper. 
If not, could the authors clarify why the method remains effective in dynamic scenarios?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to additional experiments\", \"comment\": \"The additional experiments present a solid demonstration of the effectiveness of the proposed method. I suggest incorporating both the TUM-RGBD and ScanNet++ results into the revised paper. Considering the thorough response, **I am pleased to raise my score to 6 (or 7 if that were an option) to reflect the good quality of the paper.**\"}", "{\"comment\": \"**Q1: About the performance of the registration model without warm-up.**\\n\\n**A:** As already described in the paper, our frame-to-model method is more sensitive to model initialization, as it needs to track the poses of more frames to optimize a neural implicit field; we therefore design a synthetic bootstrap method to solve this problem. To further investigate the effectiveness of our approach, we compared the results of applying synthetic warm-up to the frame-to-frame method, PointMBF. The results are shown in the tables below, where both methods were trained on 3DMatch and ScanNet. The first table presents the comparison under the 50-frame-apart setting, while the second table corresponds to the 20-frame-apart setting. 
Across both settings and training datasets, our method consistently outperforms the frame-to-frame baseline, even with synthetic warm-up applied.\\n\\n| | Train Set | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :---------------: | :-------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| F2F(with warm-up) | 3DMatch | 71.3 | 79.2 | 87.7 | 43.9 | 64.6 | 77.0 | 55.5 | 72.7 | 76.0 |\\n| F2M(with warm-up) | 3DMatch | 72.6 | 81.1 | 91.1 | 44.6 | 65.9 | 78.5 | 56.1 | 73.6 | 77.1 |\\n| F2F(with warm-up) | ScanNet | 75.2 | 82.5 | 90.3 | 47.4 | 68.3 | 80.5 | 58.8 | 76.5 | 79.4 |\\n| F2M(with warm-up) | ScanNet | 77.4 | 84.5 | 92.5 | 50.0 | 70.6 | 82.1 | 61.5 | 77.6 | 80.9 |\\n\\n| | Train Set | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :---------------: | :-------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| F2F(with warm-up) | 3DMatch | 96.1 | 98.5 | 99.6 | 81.7 | 94.5 | 98.4 | 91.2 | 97.7 | 98.5 |\\n| F2M(with warm-up) | 3DMatch | 96.3 | 98.7 | 99.7 | 81.9 | 94.7 | 98.5 | 91.4 | 97.9 | 98.6 |\\n| F2F(with warm-up) | ScanNet | 97.3 | 99.0 | 99.7 | 83.7 | 95.5 | 98.7 | 92.5 | 98.3 | 98.8 |\\n| F2M(with warm-up) | ScanNet | 97.6 | 99.1 | 99.8 | 85.5 | 95.8 | 98.8 | 93.1 | 98.4 | 98.9 |\\n\\nIt is also worth mentioning that our method was trained for only 1 epoch on 3DMatch, whereas PointMBF was trained for 12 epochs. To provide a clearer comparison, we report the results after training for 4 epochs without bootstrapping in the following table. 
As shown, our method continues to demonstrate improvements, further validating its effectiveness.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 59.3 | 62.5 | 76.6 | 34.2 | 47.9 | 61.6 | 42.9 | 55.8 | 60.2 |\\n| F2M (1 epoch) | 59.0 | 71.3 | 86.3 | 30.4 | 52.3 | 68.0 | 43.2 | 62.1 | 66.5 |\\n| F2M (4 epochs) | 66.3 | 77.5 | 90.2 | 36.5 | 58.7 | 74.8 | 48.5 | 68.7 | 73.4 |\"}", "{\"comment\": \"Thank you very much for recognizing our work and for giving us the opportunity to address your concerns!\"}", "{\"title\": \"Summary of changes on rebuttal revision\", \"comment\": [\"Dear reviewers,\", \"We have made several modifications based on your feedback and re-uploaded the PDF. The changes are summarized as follows:\", \"For **Reviewer 4zeD**, we updated all tables in the paper to report both the mean and median values of the registration models.\", \"For **Reviewer Sxy2**, we added an **Ablation on Different Module Combinations** in the Appendix to emphasize the paper's core contribution. We demonstrate that the F2M approach enables more effective unsupervised training of the registration model, with a synthetic warm-up mechanism addressing the sensitivity of F2M's neural field to initial poses.\", \"For **Reviewer Qbae**, we replaced the term \\\"bootstrap\\\" with \\\"warm-up\\\" throughout the paper to better describe the pre-training process. This modification is more pertinent to the claim that pre-training on a synthetic dataset provides a better initialization for the F2M approach to achieve better performance.\", \"For **Reviewer bx3G**, we added a comparison table between F2M-Reg and PointMBF on TUM RGB-D in the Appendix. 
Additionally, we included a **Limitations** section in Section 6 to acknowledge and discuss potential weaknesses in our method.\", \"We will continue to address the reviewers' concerns comprehensively and provide detailed responses.\"]}", "{\"comment\": \"**Q1: Where the improvements come from.**\\n\\nWe have separately studied the improvements from each component in Tab.2, Tab.3, Tab.4 and Tab.5 in the main paper. Here, we reorganize the results in the table below for a better understanding. BS refers to the synthetic bootstrap, F2M refers to frame-to-model optimization, and F2F refers to frame-to-frame optimization. Independently applying the synthetic bootstrap and frame-to-model optimization each achieves significant improvements compared with the frame-to-frame baseline, demonstrating the strong effectiveness of our designs. Applying both of them further improves the results, achieving improvements of $6$ pp over the BS-only model and $3$ pp over the F2M-only model on most metrics. 
We will refine the descriptions in the future version.\\n\\n| BS | F2F | F2M | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| ------------ | :----------: | :----------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| | $\\\\checkmark$ | | 60.4 | 68.2 | 79.9 | 40.0 | 54.3 | 66.9 | 48.9 | 61.5 | 65.8 |\\n| $\\\\checkmark$ | | | 71.3 | 78.6 | 87.4 | 46.9 | 65.5 | 76.3 | 57.5 | 72.2 | 75.0 |\\n| | | $\\\\checkmark$ | 74.4 | 82.8 | 92.3 | 46.8 | 67.9 | 80.4 | 58.5 | 75.5 | 79.0 |\\n| $\\\\checkmark$ | $\\\\checkmark$ | | 75.2 | 82.5 | 90.3 | 47.4 | 68.3 | 80.5 | 58.8 | 76.5 | 79.4 |\\n| $\\\\checkmark$ | | $\\\\checkmark$ | 77.4 | 84.5 | 92.5 | 50.0 | 70.6 | 82.1 | 61.5 | 77.6 | 80.9 |\\n\\n\\n**Q2: The effectiveness of the neural implicit field-guided mechanism.**\\n\\nThe results have already been shown in the Tab.2 and Tab.4 of the main paper and we put them together in the table below. Under the $50$-frame setting, our method without bootstrap surpasses the previous state-of-the-art method PointMBF by $14$ pp on Rotation Accuracy@$5^{\\\\circ}$, $6.8$ pp on Translation Accuracy@$5\\\\text{cm}$ and $9.6$ pp on Chamfer Distance Accuracy@$5\\\\text{cm}$, showing strong superiority.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :--------------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 60.4 | 68.2 | 79.9 | 40.0 | 54.3 | 66.9 | 48.9 | 61.5 | 65.8 |\\n| F2M-Reg(w/o bootstrap) | 74.4 | 82.8 | 92.3 | 46.8 | 67.9 | 80.4 | 58.5 | 75.5 | 79.0 |\\n| F2M-Reg(full) | 77.4 | 84.5 | 92.5 | 50.0 | 70.6 | 82.1 | 61.5 | 77.6 | 80.9 |\\n\\nWe further show the comparisons under the $20$-frame setting in the table below. 
As this setting is relatively easy, since the frame pairs share larger overlap, our method without bootstrap still outperforms the PointMBF baseline.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :--------------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 96.0 | 97.6 | 98.9 | 83.9 | 93.8 | 97.7 | 92.8 | 97.3 | 97.9 |\\n| F2M-Reg(w/o bootstrap) | 96.8 | 98.9 | 99.8 | 83.6 | 95.2 | 98.7 | 92.1 | 98.2 | 98.8 |\\n| F2M-Reg(full) | 97.6 | 99.1 | 99.8 | 85.5 | 95.8 | 98.8 | 93.1 | 98.4 | 98.9 |\"}", "{\"comment\": \"**Q1: The influence of the number of objects in the synthetic dataset.**\\n\\n**A:** Currently, there are $400$ objects per scene. We are currently working on re-rendering the dataset with $50$ objects per scene and re-training the models. The results will be made available as soon as we finish.\\n\\n**Q2: The contribution of the synthetic dataset.**\\n\\n**A:** We would first note that our contribution lies in the warm-up mechanism using a synthetic dataset to mitigate the sensitivity of our frame-to-model optimization to the model initialization, rather than in the construction of the Sim-RGBD dataset itself. We construct our own dataset instead of using existing synthetic datasets such as Replica because they commonly contain a limited number of scenes, and it is more flexible and easier to build a large dataset at little cost by randomly placing objects in a space. Notably, warming up with Sim-RGBD shows strong effectiveness in the experiments, which demonstrates the potential of such a simple synthetic dataset to benefit complicated 3D vision tasks at little extra cost. 
And we hope this could inspire more future work on this topic.\"}", "{\"comment\": \"**Q1: Evaluation on ScanNet++.**\\n\\n**A:** The table below presents the results of our tests on ScanNet++ iPhone RGB-D sequences under the 20-frame setting. Our method outperforms PointMBF significantly, with a $12.0$ pp improvement in Rotation Accuracy@$5^{\\\\circ}$, a $16.6$ pp increase in Translation Accuracy@$5 cm$, and a $14.2$ pp boost in Chamfer Accuracy@$1 cm$. These results, evaluated on the ScanNet++ dataset, further demonstrate the strong generality of our method to new datasets.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| F2M-Reg | 95.7 | 98.2 | 99.4 | 79.7 | 92.9 | 97.7 | 88.3 | 96.9 | 98.0 |\\n| PointMBF | 83.7 | 90.3 | 96.5 | 63.1 | 78.0 | 89.2 | 74.1 | 86.8 | 89.9 |\\n\\n**Q2: The performance issue.**\\n\\n**A:** The performance issues raised by other reviewers stem from comparisons between the PointMBF and the F2M approach without warm-up. As emphasized in the paper, our frame-to-model approach is sensitive to the initialization of the registration model, which is why we designed the synthetic warm-up mechanism to mitigate this challenge. To further highlight this, we compared the F2F and F2M approaches with synthetic warm-up in **Tab. 11** of the main paper, which clearly demonstrates the superiority of our F2M approach in achieving robust and accurate registration performance.\"}", "{\"metareview\": \"The paper proposes an unsupervised method for robust RGB-D registration without ground-truth pose supervision where a frame-to-model approach is employed. 
The proposed method begins with pre-training the model on a synthetic dataset, Sim-RGBD, with ground-truth poses, and subsequently fine-tunes it on real-world datasets without ground-truth poses by leveraging a neural implicit field as a 3D scene representation for pose optimization.\", \"additional_comments_on_reviewer_discussion\": \"The usage of a neural implicit field as a global scene model to capture broader scene-level information is appealing for handling complex conditions, such as low overlap. The proposed method demonstrates high performance compared to baseline methods on two real-world RGB-D datasets. On the other hand, the reviewers raised concerns regarding insufficient validation of the proposed method, a lack of analysis of the experimental results, and insufficiently detailed explanations of the proposed method. The authors have provided additional experiments and their analysis to address the raised concerns. Further discussion reveals that the improvement mostly comes from the extra training data, and that the improvement is significant for the 50-frame-apart setting but not for the 20-frame-apart case. In the end, the final scores of the reviewers are split: 2 x BR, A, and BA. The remaining concerns by the negative reviewers are still insufficient validation and technical novelty and depth. Evaluations on ScanNet and 3DMatch are not thorough. As the authors have admitted, the performance gain from Sim-RGBD warm-up across the two datasets is inconsistent. The authors reason that this is due to (1) the domain gap, because ScanNet is used for testing in all the experiments, and (2) the difference in dataset size. The domain gap could easily be evidenced by swapping the roles of ScanNet and 3DMatch in the experiments, which is missing. The new experiment on TUM RGB-D covers only the 50-frame setting, while that on ScanNet++ covers only the 20-frame setting. Why not both settings on both datasets? With further results, the generalization ability of the proposed method under high/low overlap across datasets could have been discussed. 
Applicability to unordered data (Reviewer bx3G\\u2019s comment) is not evidenced. The influence of the dataset size for warm-up is newly provided; however, the performance behaviors are not aligned with the size (up and down). Nevertheless, the authors observe that increasing the dataset size increases the performance, although the improvement is not significant. Regarding the comparison between F2M and F2F with warm-up, the performance difference under the 50-frame setting is significant, while that under the 20-frame setting is negligible. In-depth discussion of this observation is not addressed. As seen, the validation of the proposed method is not yet thorough. Reviewer 4zeD acknowledges the effectiveness in correspondence matching and pose estimation under large viewpoint change (50 frames), though under the common viewpoint-change setting in existing work (20 frames) the performance is on par with SOTA, and still has major concerns about the technical novelty and depth. Reviewer 4zeD commented as below in the AC-reviewer discussion phase:\\n\\n1.\\tThe main contribution of the work is leveraging a scene and pose optimization method (like SLAM, or SfM with NeRF) to provide the pose label for the correspondence learning of RGB-D image pairs. This strategy is one of the main solutions to acquire GT for correspondence learning, for example using COLMAP to get poses and matching points for image pairs. This work replaces COLMAP with a more recent NeRF-based SfM method. Compared with SOTA methods, e.g. PointMBF, which only use mutual RGB or depth consistency between image pairs without knowledge about the scene from COLMAP or NeRF-based SfM, this GT from SfM is actually an oracle.\\n\\n2.\\tThe other concern is its claim of the contribution of the synthetic data. I acknowledge that using synthetic data will definitely help the optimization of NeRF-based SfM, but I cannot see in which sense it fills the gap left by existing synthetic or real datasets. 
Pretraining on synthetic data with randomly placed objects for point matching is also quite common [1].\\n[1] Zachary Teed and Jia Deng. RAFT: Recurrent All-Pairs Field Transforms for Optical Flow. ECCV 2020.\\n\\nAC shares Reviewer 4zeD\\u2019s observation that the meaningful improvement is only for the 50-frame setting (low overlap). Then, the performance gain under different low-overlap scenarios should be systematically evaluated to confirm this observation further, leading to a more proper claim rather than the claim that the proposed method overcomes SOTAs. More in-depth arguments to clarify the real contributions of this work should be addressed to convince the reviewers. AC also finds that the discussion between the authors and the reviewers has not been adequately reflected in the revised manuscript, meaning the final version will be largely different from the current version, and another round of reviews will be required. On balance, AC finds the remaining concerns to outweigh the current technical contributions, and agrees that the paper would benefit from more work. For this reason, the paper cannot be accepted to this conference.\"}", "{\"comment\": \"**Q5: Detail on the initial poses in the fine-tuning stage.**\\n\\nWe omit the original constant-movement-based pose initialization in Co-SLAM and directly use the pose predicted by our model as the initial pose. In this task, the two frames are farther apart than in a traditional SLAM task, i.e., the frame pairs are $20$ frames apart in our case, so the constant-movement assumption does not hold anymore. We then optimize the predicted pose in the tracking and the mapping stages, and the optimized pose is used to train the registration model.\\n\\n**Q6: Limitations.**\\n\\nDespite the state-of-the-art performance, there are still some limitations in our method. (1) Our method cannot be directly used in outdoor scenes. On the one hand, the excessive foreground and background depth variations can cause the subscene to be incorrectly constructed. 
On the other hand, it is difficult to encode a large open scene with one neural implicit field. A possible solution to this problem is to leverage multiple local neural fields, as in Block-NeRF. We leave this as future work. (2) Our method requires a relatively long training time due to the optimization of the neural field. We can reduce the tracking iterations to balance accuracy and efficiency. Leveraging more efficient neural representations such as 3D Gaussian splatting is also a promising research direction. We will add these discussions in the future version.\\n\\n[1]: Mohamed El Banani, Luya Gao, and Justin Johnson. UnsupervisedR&R: Unsupervised point cloud registration via differentiable rendering. CVPR 2021, pp. 7129-7139.\\n\\n[2]: Mohamed El Banani and Justin Johnson. Bootstrap your own correspondences. ICCV 2021, pp. 6433-6442.\\n\\n[3]: Ziming Wang, Xiaoliang Huo, Zhenghao Chen, Jing Zhang, Lu Sheng, and Dong Xu. Improving RGB-D point cloud registration by learning multi-scale local linear transformation. ECCV 2022, pp. 175-191.\\n\\n[4]: Mingzhi Yuan, Kexue Fu, Zhihao Li, Yucong Meng, and Manning Wang. PointMBF: A multi-scale bidirectional fusion network for unsupervised RGB-D point cloud registration. ICCV 2023, pp. 17694-17705.\\n\\n[5]: Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, et al. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1): 99-106, 2021.\\n\\n[6]: Peng Wang, Lingjie Liu, Yuan Liu, et al. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689, 2021.\"}", "{\"comment\": \"**Q3: About the performance of the registration model without warm-up.**\\n\\n**A:** As already described in the paper, the frame-to-model approach is more sensitive to the model initialization, as it needs to track the poses of more frames to optimize a neural implicit field. 
This sensitivity prompted the design of our synthetic warm-up method to address the issue. For more comparisons, we apply the synthetic warm-up to both the F2F and F2M approaches on both datasets in the following tables: the first table corresponds to the 50-frame setting, and the second table to the 20-frame setting. These comparisons highlight the superiority of the F2M approach over the F2F approach, demonstrating its effectiveness in training the registration model. \\n\\n| | Train Set | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :---------------: | :-------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| F2F(with warm-up) | 3DMatch | 71.3 | 79.2 | 87.7 | 43.9 | 64.6 | 77.0 | 55.5 | 72.7 | 76.0 |\\n| F2M(with warm-up) | 3DMatch | 72.6 | 81.1 | 91.1 | 44.6 | 65.9 | 78.5 | 56.1 | 73.6 | 77.1 |\\n| F2F(with warm-up) | ScanNet | 75.2 | 82.5 | 90.3 | 47.4 | 68.3 | 80.5 | 58.8 | 76.5 | 79.4 |\\n| F2M(with warm-up) | ScanNet | 77.4 | 84.5 | 92.5 | 50.0 | 70.6 | 82.1 | 61.5 | 77.6 | 80.9 |\\n\\n| | Train Set | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :---------------: | :-------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| F2F(with warm-up) | 3DMatch | 96.1 | 98.5 | 99.6 | 81.7 | 94.5 | 98.4 | 91.2 | 97.7 | 98.5 |\\n| F2M(with warm-up) | 3DMatch | 96.3 | 98.7 | 99.7 | 81.9 | 94.7 | 98.5 | 91.4 | 97.9 | 98.6 |\\n| F2F(with warm-up) | ScanNet | 97.3 | 99.0 | 99.7 | 83.7 | 95.5 | 98.7 | 92.5 | 98.3 | 98.8 |\\n| F2M(with warm-up) | ScanNet | 97.6 | 99.1 | 99.8 | 85.5 | 95.8 | 98.8 | 93.1 | 98.4 | 98.9 |\"}", "{\"comment\": \"We thank all reviewers for their valuable comments, recognizing the interesting problem being solved (Reviewer Sxy2), the technical soundness of the method (All Reviewers), the good writing 
quality (Reviewer 4zeD), the intuitive illustrations (Reviewer Sxy2, Reviewer Qbae), the comprehensive experiments (All Reviewers), the outstanding performance (All Reviewers), and the application value (Reviewer 4zeD). We will address each reviewer's concerns individually below.\"}", "{\"comment\": \"**Q1: The claim about 'unsupervised RGB-D Registration' / 'unsupervised learning'**\\n\\nFollowing previous work [1] [2] [3] [4], unsupervised RGB-D registration aims to train the registration model without the supervision of ground-truth poses. This line of work usually first computes the pseudo poses between the point cloud pairs, and trains the model based on the pseudo poses. Our method also follows this paradigm, with a novel frame-to-model optimization method to obtain accurate pseudo poses. We also propose a model initialization method based on synthetic data. On the other hand, zero-shot learning focuses on predicting novel classes or domains without the corresponding data, which is different from this task.\\n\\n**Q2: More evaluations on different datasets.**\\n\\nWe compare our method with PointMBF on TUM RGB-D under the $50$-frame setting. Our method surpasses PointMBF by $9.4$ pp on Rotation Accuracy@$5^{\\circ}$, $12$ pp on Translation Accuracy@$5\\text{cm}$, and $9.5$ pp on Chamfer Accuracy@$1\\text{cm}$. These results have proven the strong generality of our method to new datasets. 
We will add these results in the future version.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :-----------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 85.9 | 97.9 | 100.0 | 66.5 | 84.3 | 98.4 | 69.6 | 86.4 | 91.6 |\\n| F2M-Reg(full) | 95.3 | 96.9 | 100.0 | 78.0 | 94.8 | 99.5 | 79.1 | 95.3 | 96.9 |\\n\\n**Q3: Working on unordered data.**\\n\\nIt is common practice for unsupervised RGB-D registration methods [1] [2] [3] [4] to train the registration model with RGB-D sequences. The main reason is to guarantee that there is reasonable overlap between two frames, which is difficult to achieve with unordered data. It is the overlap that counts, rather than the ordering. The same holds for the Co-SLAM part. Theoretically, if the unordered data are organized to guarantee the overlap between frames, our method should also work. Besides, our method only leverages RGB-D sequences during training; we do not rely on RGB-D sequences during inference and thus can handle various multi-view RGB-D data in applications.\\n\\n**Q4: The reason why the F2M approach is superior to the F2F approach.**\\n\\nThe neural implicit field still has challenges in handling multi-view inconsistency, especially in extreme cases. However, compared to the traditional rasterization process used in frame-to-frame methods, the neural implicit field is well known for its stronger capability to handle multi-view inconsistency, as noted in NeRF [5], NeuS [6], etc. By encoding the view direction information, the neural implicit field can render more geometrically and photometrically consistent images, which helps optimize a more accurate pose to train the registration model. 
On the contrary, the frame-to-frame method only projects one frame into the other and compares the consistency between the two images, which is more easily affected by multi-view inconsistency.\"}", "{\"comment\": \"**Q2: About the results under the 20-frame setting.**\\n\\n**A:** The $20$-frame setting is more of an *easy* setting than a *common* one, as the viewpoint changes are small, and thus the performance tends to be saturated. However, in real applications, the $20$-frame setting does not always hold. 
For example, in scenes with fast motion or a low frame rate, frame-to-frame methods could fail, but our method still provides accurate pose estimates.\\n\\nWe further report the results on 3DMatch in the table below.\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(5) | Chamfer(10) | Chamfer(45) |\\n| :--------------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF(w/o warm-up) | 94.6 | 97.0 | 98.7 | 81.0 | 92.0 | 97.1 | 91.3 | 96.6 | 97.4 |\\n| F2M (w/o warm-up) | 95.7 | 98.6 | 99.7 | 79.8 | 94.2 | 98.5 | 90.5 | 97.8 | 98.5 |\\n| PointMBF(with warm-up) | 96.1 | 98.5 | 99.6 | 81.7 | 94.5 | 98.4 | 91.2 | 97.7 | 98.5 |\\n| F2M (with warm-up) | 96.3 | 98.7 | 99.7 | 81.9 | 94.7 | 98.5 | 91.4 | 97.9 | 98.6 |\\n\\n**Q3: About the low-overlap cases.**\\n\\n**A:** First of all, we would note that our method and PointMBF share a similar registration model but differ in the training methods, i.e., frame-to-model vs. frame-to-frame. One of the main disadvantages of frame-to-frame methods is that they cannot estimate accurate poses for low-overlap pairs during training, which leads to suboptimal convergence of the model. On the contrary, our frame-to-model approach can provide better poses for low-overlap pairs, which contributes to a more robust model, as shown in Tab.2 of the main paper. These results explicitly demonstrate the superiority of our method. Moreover, we compare the poses estimated during training by both methods on pairs with rotation > $20^{\\circ}$ on ScanNet scene0000_00 in the table below. 
It can be seen that our F2M method achieves clearly better pose optimization during training, providing the registration model with better supervision signals.\\n\\n| | Rot(mean) | Rot(median) | Trans(mean) | Trans(median) |\\n| :--: | :-------: | :---------: | :---------: | :-----------: |\\n| F2F | 4.9 | 2.2 | 13.0 | 6.0 |\\n| F2M | 2.9 | 0.8 | 7.0 | 3.0 |\\n\\n**Q4: About the differences between ScanNet and 3DMatch.**\\n\\n**A:** We show the statistics of the rotations, the translations, and the overlap ratios between the training pairs for both datasets. It can be seen that the two datasets do not differ much in data distribution. However, the sizes of the two datasets differ significantly, i.e., 1045 vs 71, which could be the main cause of the performance differences.\\n\\n| | Rot(mean) | Rot(median) | Trans(mean) | Trans(median) | Overlap(mean) | Overlap(median) |\\n| :-----: | :-------: | :---------: | :---------: | :-----------: | :-----------: | :-------------: |\\n| ScanNet | 11.9 | 11.0 | 18.2 | 16.0 | 63.6% | 0.7 |\\n| 3DMatch | 10.2 | 8.4 | 17.1 | 14.4 | 65.1% | 0.7 |\"}", "{\"title\": \"Summary of the Discussion\", \"comment\": \"Dear Chairs and Reviewers,\\n\\nHope this message finds you well. \\n\\nAs the discussion period comes to a close, we would like to provide a brief summary of our discussions with the reviewers for reference. First and foremost, we sincerely thank all the reviewers for their insightful comments and valuable suggestions.\", \"we_summarize_the_main_concerns_raised_by_the_reviewers_along_with_our_corresponding_responses_as_follows\": [\"**About the performance between F2M-Reg w/o warm-up and PointMBF**. As detailed in the main paper, the frame-to-model approach is highly sensitive to model initialization, which we address with the synthetic warm-up mechanism. 
Additional results from various training datasets (ScanNet, 3DMatch) and evaluation settings (20/50-frame apart) further demonstrate the superiority of our method.\", \"**About the writing.** We offer a deeper explanation of the pipeline and its individual modules, including further discussion on the differences between zero-shot learning and unsupervised registration in our method. We also clarify and modify ambiguous terms and provide a more detailed description of the advantages of the frame-to-model approach over the frame-to-frame approach, supported by extensive experimental evidence.\", \"**About the usage of a Co-SLAM-like approach in the fine-tuning stage.** This issue extends to the question of whether training datasets need to be sequential and the scope of our method. Notably, our approach can train the registration model under low-overlap conditions, where the assumption of constant movement no longer holds. The registration model then provides the initial pose necessary for constructing the neural implicit field.\", \"**About more ablation studies.** We attribute the improvement to both the warming-up and fine-tuning stages. To identify the specific contributions of each, we conduct ablation studies on warming-up dataset size, object count per scene, model generalizability, and rendering strategies. 
Furthermore, we evaluated our method and PointMBF under the 20-frame-apart and 50-frame-apart settings.\", \"---\", \"Based on the discussion with reviewers, we also present a brief summary of our paper as follows:\", \"**Observation:** Existing frame-to-frame methods usually adopt a pairwise training strategy based on differentiable rendering, which suffers from poor multi-view consistency due to factors such as lighting changes, geometry occlusion and reflective materials.\", \"**Solution:** F2M-Reg addresses these issues by introducing a frame-to-model optimization framework, which leverages the neural implicit field as a global model of the scene and uses the consistency between the input and the re-rendered frames for pose optimization. To facilitate the neural field optimization, F2M-Reg warms up the registration model on the synthetic dataset Sim_RGBD.\", \"**Results:** F2M-Reg outperforms the state-of-the-art counterparts on two popular indoor RGB-D datasets, ScanNet and 3DMatch. F2M-Reg also achieves significantly better performance than previous methods in more challenging scenarios with lower overlap or severe lighting changes.\", \"**Highlights:** Focusing on the unsupervised RGB-D registration task, our work has the following highlights:\", \"**Frame-to-model optimization framework:** The infusion of global reconstruction information enhances the reliability of re-rendering, which fortifies the robustness of the registration model.\", \"**Synthetic warm-up mechanism:** The warm-up mechanism provides high-quality initial poses for neural implicit field optimization.\", \"Thank you once again for your efforts in reviewing and discussing our work. 
We greatly appreciate all the valuable feedback that contributed to enhancing our submission.\", \"Sincerely\", \"Authors of Submission 7404\"]}", "{\"comment\": \"**Q1: The effectiveness of the warming up dataset.**\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :--------------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 96.0 | 97.6 | 98.9 | 83.9 | 93.8 | 97.7 | 92.8 | 97.3 | 97.9 |\\n| F2M-Reg(w/o bootstrap) | 96.8 | 98.9 | 99.8 | 83.6 | 95.2 | 98.7 | 92.1 | 98.2 | 98.8 |\\n| F2M-Reg(full) | 97.6 | 99.1 | 99.8 | 85.5 | 95.8 | 98.8 | 93.1 | 98.4 | 98.9 |\\n\\nIn the table above, we compare PointMBF, F2M-Reg w/o bootstrapping, and full F2M-Reg under the $20$-frame setting on ScanNet. Our method still outperforms PointMBF even without bootstrapping; in particular, it reduces extremely erroneous registrations. We further show the results under the $50$-frame setting (see the table below) with more severe multi-view inconsistency due to light variations and geometric occlusion. In this setting, our improvements are more significant, i.e., $14$ pp on Rotation Accuracy@$5^{\\\\circ}$, $6.8$ pp on Translation Accuracy@$5\\\\text{cm}$, and $9.6$ pp on Chamfer Distance Accuracy@$5\\\\text{cm}$. These results further demonstrate the superiority of our frame-to-model design. 
\\n\\n| | Rot(5) | Rot(10) | Rot(45) | Trans(5) | Trans(10) | Trans(45) | Chamfer(1) | Chamfer(5) | Chamfer(10) |\\n| :--------------------: | :----: | :-----: | :-----: | :------: | :-------: | :-------: | :--------: | :---------: | :---------: |\\n| PointMBF | 60.4 | 68.2 | 79.9 | 40.0 | 54.3 | 66.9 | 48.9 | 61.5 | 65.8 |\\n| F2M-Reg(w/o bootstrap) | 74.4 | 82.8 | 92.3 | 46.8 | 67.9 | 80.4 | 58.5 | 75.5 | 79.0 |\\n| F2M-Reg(full) | 77.4 | 84.5 | 92.5 | 50.0 | 70.6 | 82.1 | 61.5 | 77.6 | 80.9 |\\n\\n\\n**Q2: 50 frames apart in the evaluation phase.**\\n\\n\\nThe $50$-frame setting aims to evaluate the performance under low overlap, which is common in real applications with fast motion or low frame rate. It is more challenging due to the severe interference of the non-overlap region. In the evaluation, we exhaustively include all consecutive pairs which are $50$ frames apart, e.g., frame $1$ and $51$, frame $51$ and $101$, etc. We **do not exclude** any pairs even if they have extremely low overlap or no overlap. This further increases the difficulty of this setting. We do not show the mean and median errors in Tab.2 of the main paper; we report them below. The table below shows the mean and median errors of the registration model trained on ScanNet. The results are clearly worse than those under the $20$-frame setting, but our method still outperforms PointMBF by a large margin.\\n\\n| | Rotation Mean | Rotation Median | Translation Mean | Translation Median | Chamfer Mean | Chamfer Median |\\n| :-----------: | :-----------: | :-------------: | :--------------: | :----------------: | :----------: | :------------: |\\n| PointMBF | 19.1782 | 2.2901 | 38.0614 | 5.9361 | 85.7505 | 0.6877 |\\n| F2M-Reg(full) | 15.5138 | 1.9196 | 30.0634 | 5.0210 | 73.8352 | 0.5405 |\"}
I've upgraded my rating to accept and changed presentation from fair to good to reflect this.\"}", "{\"comment\": \"Thanks for your valuable feedback and recognition of our work!\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"I really appreciate the comprehensive responses from the authors. Some of my concerns regarding the F2F and F2M, the integration of Co-SLAM, and the training strategies are sufficiently addressed. The authors provide additional experiments on TUM-RGBD, showing their superiority over the baseline method. However, no experiments on the recommended ScanNet++ (Yeshwanth et al.) are provided. Considering the performance issues also highlighted in other reviews, I keep my score unchanged.\", \"a_minor_suggestion\": \"The PDF file can be modified and reuploaded to reflect the revisions. Instead of promising future changes, it is more straightforward to show and highlight the revisions directly.\\n\\n\\nYeshwanth, Chandan, et al. \\\"Scannet++: A high-fidelity dataset of 3d indoor scenes.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\"}", "{\"title\": \"Seems only improving on the chosen ScanNet dataset\", \"comment\": \"Thanks for the authors' response, but it still does not fully address my concern:\\n1. The summarized improvement on the 50-frame setting only works on the ScanNet dataset, but according to Tab. 2 and Tab. 4, the performance of F2M on 3DMatch is lower than PointMBF on Rotation 5 and Translation 5. \\n2. On the more common 20-frame setting, the improvement of F2M w/o bootstrapping is marginal on the ScanNet dataset. The comparison on 3DMatch is not reported.\\n3. PointMBF is not designed for low-overlap scenarios like 50 frames apart, but this is not clearly discussed in the paper.\\n\\nThe differences between the two datasets should be discussed and analyzed explicitly.\"}", "{\"summary\": \"The paper introduces a method for RGB-D registration using frame-to-model optimization. 
The pipeline incorporates a pretrained registration model using PointMBF and Co-SLAM for point cloud registration. The experiments on real-world ScanNet and 3DMatch datasets demonstrate the superiority of the proposed method. The ablation studies investigate the effectiveness of each component.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The writing is comprehensive and easy to follow\", \"The proposed method demonstrates high performance compared to baseline methods on two real-world RGB-D datasets.\", \"The ablation studies in the main text and supplementary materials are thorough and effectively showcase the efficacy of the proposed framework.\"], \"weaknesses\": [\"The proposed method should not be characterized as unsupervised learning as claimed. Instead, it involves a registration model that has been pretrained on a synthetic RGB-D dataset and subsequently adapted to real-world RGB-D scenes during inference. Strictly speaking, the method aligns more precisely with a zero-shot learning approach. Please ensure the claims are accurately represented.\", \"The experiments could benefit from utilizing more recent datasets. The ScanNet-v2 dataset, while widely used, is dated and known for sensor noise that results in unreliable depth maps with many gaps containing zero or infinity values. More accurate and high-resolution RGB-D streams are available in newer datasets like ScanNet++ and TUM RGB-D. Conducting additional experiments on these recent datasets is recommended.\", \"The proposed pipeline incorporates Co-SLAM, which utilizes a tracking system for camera pose estimation, requiring that the input RGB-D stream strictly follow a time series. However, this requirement poses limitations for general RGB-D registration tasks that handle multi-view data, where the input does not necessarily adhere to a time-series format. 
In scenarios involving unordered data, the effectiveness of a SLAM system may be compromised.\"], \"questions\": [\"The paper states that the frame-to-frame framework experiences difficulties in maintaining multi-view consistency due to issues like lighting variations, geometric occlusions, and reflective surfaces; however, NeRF encounters similar challenges under these conditions. It should be clarified how the frame-to-model approach addresses these limitations in comparison to the frame-to-frame method.\", \"In the proposed framework, initial poses are generated using a bootstrap method trained on synthetic RGB-D data. Considering that Co-SLAM, utilized for tracking within the proposed framework, also incorporates its pose initialization strategy based on constant movement assumption, how do these initial poses from the bootstrap method align or integrate with the initial pose assumptions in Co-SLAM? Specifically, please detail any adaptations or refinements made to ensure consistency between the poses initialized by F2M-Reg and the subsequent pose tracking performed by Co-SLAM,\", \"Please also discuss the limitations of the proposed methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concern.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5FXKgOxmb2
MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds
[ "Leon Hetzel", "Johanna Sommer", "Bastian Rieck", "Fabian J Theis", "Stephan Günnemann" ]
Recent advances in machine learning for molecules exhibit great potential for facilitating drug discovery from in silico predictions. Most models for molecule generation rely on the decomposition of molecules into frequently occurring substructures (motifs), from which they generate novel compounds. While motif representations greatly aid in learning molecular distributions, such methods fail to represent substructures beyond their known motif set, posing a fundamental limitation for discovering novel compounds. To address this limitation and enhance structural expressivity, we propose to separate structure from features by abstracting motifs to scaffolds and, subsequently, allocating atom and bond types. To this end, we introduce a novel factorisation of the molecules' data distribution that considers the entire molecular context and facilitates learning adequate assignments of atoms and bonds to scaffolds. Complementary to this, we propose MAGNet, the first model to freely learn motifs. Importantly, we demonstrate that MAGNet's improved expressivity leads to molecules with more structural diversity and, at the same time, diverse atom and bond assignments.
[ "graph generative models", "2d molecules" ]
Accept (Spotlight)
https://openreview.net/pdf?id=5FXKgOxmb2
https://openreview.net/forum?id=5FXKgOxmb2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykuitkm0MY", "xhphuCl26v", "wo8xJAACXn", "vKtK28xP6a", "trO6NI6lVp", "s5kH6I6Bsg", "piqJPhqGAN", "hn253hFrNj", "hdrQqizxbR", "gTJJlnKpm7", "fGhfi8qyWu", "df6e9uBTaR", "V8kdbFf5ag", "UMLjeoMU9U", "P0uTr68r1Y", "NMdjFqCJZC", "L4b1sKOHwM", "KREs3qMIvY", "IGmBEme28e", "I6bgIcGXw2", "I2lxxjobrC", "G9ySh84QaS", "FvaNn4Xmim", "Ck2EkRzD21", "3LLthf69X4" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733074396684, 1737523816396, 1732268919725, 1732532816478, 1730399102835, 1732211029869, 1732114599533, 1732784045775, 1732784081263, 1730275207662, 1732115138828, 1732115785016, 1730729184623, 1734541985764, 1732114001619, 1732784118231, 1732116478887, 1732862033844, 1732557311964, 1732474385133, 1732526353408, 1730544514117, 1732260198374, 1732648491265, 1732417921467 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_hxc4" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_w2Yn" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_w2Yn" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_8uqV" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7094/Reviewer_hxc4" ], [ "ICLR.cc/2025/Conference/Submission7094/Area_Chair_Zd3Q" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_8uqV" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Authors" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_Ry7i" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_Ry7i" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_8uqV" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_hxc4" ], [ "ICLR.cc/2025/Conference/Submission7094/Reviewer_8uqV" ] ], "structured_content_str": [ "{\"comment\": \"&nbsp;\\n\\nWe thank the reviewer for their engagement and are **pleased that we successfully addressed their main concern about how MAGNet generates complex structures, such as fused rings**. We appreciate that this is reflected in an increased score. \\n\\n&nbsp;\\n\\nFurthermore, we acknowledge the reviewer\\u2019s criticism regarding MAGNet\\u2019s performance in some metrics, such as FCD or SA. We would like to take the chance and put it in the global context of our work:\\n \\n- In our manuscript, we emphasise the insufficiency of metrics like FCD, a perspective supported by literature in Computer Vision. Our work provides a new approach to enhance structural expressivity, which cannot be adequately captured by FCD scores, as demonstrated in our analysis in Sec.5.2, cf. L425 ff. \\n- Singling out the FCD metric conveys an incomplete picture of our work. We demonstrate how MAGNet improves upon baselines throughout our experiments, cf. 
Figs.1/3/4.\\n- Finally, we do not consider the drop in the SA score a significant limitation; rather, we consider our proposed solution regarding synthesizability, as discussed in the limitation section (L285 ff.), more relevant for practical scenarios.\\n\\n&nbsp;\\n\\nWe are grateful for the constructive feedback and the discussion with the reviewer. \\n\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"&nbsp;\\n\\nThank you for the prompt response and for engaging in a discussion with us. We believe there has been a fundamental misconception regarding Fig. 1 and apologise that we misunderstood the reviewer's main concern. In the following, we would like to clarify this essential aspect of our work. \\n\\n&nbsp;\\n\\n> Based on my understanding, your method still struggles to generate complex fused ring systems as shown in Figure 1a, correct?\\n\\n- That is indeed not correct. **Quite the opposite: with our approach, complex scaffolds such as fused rings become accessible**.\\n- In Fig.1a we only highlight the shortcomings of existing motif-based methods (in particular MoLeR) on the reconstruction of approved drugs with complex scaffolds. This presentation was meant to motivate the necessity for rethinking motif-based molecular generators. \\n- In App.D.2 (Figure 10), we demonstrate how MAGNet *can* decode the complex scaffolds shown in Fig.1a. \\n- We also explicitly evaluate the decoding capabilities of such scaffolds quantitatively in Sec.5.1, Fig.3 in particular. **What we refer to as uncommon cyclic scaffolds are exactly those complex scaffolds the reviewer mentions, and we improve upon baselines significantly**.\\n- It is important to highlight that our vocabulary is much more expressive than existing ones, containing scaffolds of structures like Pentacene, Triphenylene and Benzo[a]pyrene, and generally scaffolds of fused rings with up to 6 rings, as well as bicyclic scaffolds. 
\\n- **To avoid future misunderstandings, and if the reviewer finds this beneficial, we could swap the presentations of Fig.1a and Fig.10**. This change would aid in showing what MAGNet can do in the main part of the manuscript and highlight limitations of existing works only in the appendix.\\n\\n&nbsp;\\n\\nWe sincerely hope that this clears up the misunderstanding regarding MAGNet\\u2019s capabilities and our contributions. In light of this clarification, we would appreciate it if the reviewer reconsidered their evaluation, and we look forward to further discussion.\"}", "{\"summary\": \"In this work, the authors aim to address a fundamental limitation of fragment-based molecular generators. The vocabulary of such models is defined as the union of common molecular fragments, obtained from datasets of known molecules, and individual atoms, which can be used to re-generate motifs that would not be listed under the available fragments. The authors argue that this choice of vocabulary creates an inherent tradeoff in the expressivity of the model. On the one hand, including more fragments quickly increases the vocabulary of the model, with the number of motif variations increasing exponentially with their size. 
On the other hand, learning to model missing fragments from individual atoms is a challenging task requiring even more training data and often leading to unrealistic molecular motifs.\\n\\nAs a solution, the authors propose a coarse-to-fine-grained molecular generation paradigm centred around molecular scaffolds, rather than fragments, as the basic building block of the model's vocabulary. A single scaffold implicitly captures many similar fragments, allowing for a relatively small vocabulary size while retaining expressivity. The proposed model, MAGNet, a VAE-based molecular generator, operates on this multi-level factorisation paradigm, by first sampling scaffolds and only then specifying the atomic composition of the scaffolds, their joints and the leaf nodes as a successive step. They evaluate this approach on several benchmarks against a variety of baselines and show that the proposed method, with its more expressive vocabulary, can reliably generate complex molecular motifs, in addition to allowing for latent code optimisation and interpolation.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"In general, I found the paper interesting and very well written. It clearly identifies a limitation of current fragment-based molecular generators in terms of expressivity, and presents a well-motivated solution based on a novel factorisation paradigm. I believe this is a good paper which brings significant contributions to fragment-based molecular generation approaches and for this reason I recommend that the paper should be accepted. I suggested some clarifications (see below) which I think would increase the clarity of the paper and complement the discussion on the limitations of the method and the general positioning of fragment-based and one-shot graph molecular generators.\", \"a_few_specific_notes\": \"1. The Related Work section is exhaustive.\\n2. The proposed method is clearly introduced and detailed. 
In particular, Figures 2 and 6 effectively convey the hierarchy supporting this paradigm.\\n3. The experimental section is well structured and the authors compare to a large array of prior work using several public datasets. The results effectively support the main claims made in the paper. In particular, the results presented in Figure 3 and Figure 4 are compelling.\", \"weaknesses\": \"1. One of the main limitations of fragment-based methods is the absence of synthesizability considerations in the framework design. This significantly limits the applicability of such methods since, to be tested in physical and biological assays, the proposed molecules either require individual and expensive custom synthesis plans or have to be replaced by available analogs, thus drastically under-utilizing the expressivity of the model. It would be interesting to further discuss the limitations of the proposed method w.r.t. synthesizability in the manuscript.\\n2. On line 371: \\\"We do not report Novelty and Uniqueness, as almost all evaluated models achieve 100% on these metrics.\\\" If this claim is made, then the numbers should be presented (at least in Appendix). Same for the mention right after: \\\"For the baselines DiGress, SM-LSTM, and CharVAE, which are not able to achieve 100% Validity, we sample until we obtain 10^4 valid molecules\\\"; it could be informative to include these numbers in the appendix (validity rate for each method).\\n3. It would be useful to specify a bit more clearly how benchmarking on Guacamol is executed (lines 301-304 and 309-311). Is the model trained on ZINC and then evaluated on some reference set defined by Guacamol (Chembl)? Or is the model trained on Chembl molecules and compared to a test set also defined in Guacamol? While the provided references are useful, the description of the experiments should be standalone in the paper.\\n4. 
In the goal-directed evaluations, it would be interesting to compare MAGNet with methods specifically aimed at goal-directed molecular design, such as RL- and GFlowNet-based methods.\\n\\n### Elements worth clarifying\\n\\n1. Figure 1 could be made clearer by further explaining parts a), b) and c) in the caption.\\n2. The factorisation from graph to scaffold graph, and scaffold graph to molecular graphs, described in Section 3, would be clearer with a supporting Figure.\\n3. The main baselines (PS-VAE, MoLeR and MiCaM) could be described in greater detail.\\n4. I found the discussion on novel conditioning capabilities very interesting (lines 469-476); however, even with this in mind, it is not clear to me what Figure 5 is showing or how it supports these claims. I think this figure could be improved to better support this discussion.\\n\\n### Minor comments:\\n\\n1. Typo on line 155, factorise*\\n2. Line 150: not sure that App C.6 is the intended link here (or how it relates to the sentence)?\\n3. Typo on line 302: to evaluate*\\n4. Error in Figure 3: based on the text and the caption itself, it seems that the graph columns B and C are mixed up in Figure 3.\\n5. I did not find the caption of Table 1 very natural to read. I would suggest simply specifying in parentheses what underline and bold mean in the table, as opposed to underlining and bolding their description in the caption.\", \"questions\": \"1. My understanding of the experiments presented in Figure 1-a, Figure 3 and Figure 9 (Section 5.1) is that the fragment-based approaches fail to reconstruct complex motifs that are absent from their fragment vocabulary by using atom-based tokens only. In contrast, MAGNet contained these structures in its scaffold vocabulary. Why haven't both methods used the same dataset to construct their fragment/scaffold vocabularies, without limiting fragment-based methods to only the top-K fragments (i.e. including all fragments)? 
In this case, the modeling task from the fragment-based methods using a vast vocabulary of fragments would be more difficult, but the method wouldn't lack these important fragments such as big rings, preventing them from reconstructing specific molecules. Have experiments on the baselines been carried out by varying the value of \\\"k\\\" in the top-k fragment-based vocabularies to see how vocabulary size trades-off with learning complexity? It seems to me that this might be a point where a more appropriate tuning of the baseline's hyperparameters would make a difference. And if the main advantage of the proposed factorisation is that it removes the need for such tuning when constructing the fragment vocabulary, it would be interesting to discuss these considerations in the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I want to acknowledge the response of the authors and I appreciate the efforts made to clarify certain elements of the paper and the addition of an additional experiments with larger vocabulary.\"}", "{\"comment\": [\"&nbsp;\", \"We thank the reviewer for their detailed feedback and thoughtful insights. We are grateful for the reviewer\\u2019s acknowledgement of scaffolds as an original and valuable proposal, as well as their recognition of the technical execution and soundness of our work. In the following, we would like to address the reviewer\\u2019s main concern: the categorisation of MAGNet as a one-shot (OS) model.\", \"&nbsp;\", \"> One-Shot (OS) terminology\", \"In our manuscript, we aimed to adopt the established terminology to classify different methods. However, we acknowledge that the strict definitions do not perfectly apply to our approach, nor PS-VAE or DiGress. 
We sincerely appreciate the reviewer highlighting this.\", \"Fundamentally, we argue that there is a difference between (i) the underlying neural network architecture used to generate aspects of the molecule and (ii) the process of molecule generation (sequential or one\\u2013shot). These two are orthogonal concepts in our view.\", \"Regarding (i), MAGNet and PS-VAE, for example, utilise, among others, recurrent and autoregressive architectures such as RNNs to generate specific, independent components of the molecule, e.g. the set of motifs.\", \"Regarding (ii), methods like MAGNet and PS-VAE divide the molecular graph into distinct parts, e.g. atom and bond types, which are predicted once and define different characteristics of the molecular graph. In contrast, sequential generation builds molecules step-by-step from parts that share the same characteristics, e.g. motifs or single atoms. This distinction is well reflected in the factorisations of these methods:\", \"MAGNet divides $P(G)$ into distinct components: scaffolds $S$, their connectivity $A$ and representation $M$, joins $J$, and leaves $L$.\", \"Similarly, PS-VAE factorises $P(G)$ into nodes (atoms and motifs) and bonds. We classify this method as OS for the same reason, even though it decodes e.g. the nodes with an autoregressive module.\", \"We also consider DiGress, while iterating over the graph generation in many steps, as an OS model, as it does not condition on a partial molecule over the generation, cf. L114-115.\", \"MoLeR, a sequential approach, generates the molecule step-by-step $P(M_i | M_{i+1}, \\u2026, M_1, z)$, attaching motif after motif. Each generation step is designed identically and consists of multiple modules, which are repeatedly called over the process of the generation. 
Importantly, such methods can condition on the intermediate state of a molecule, which OS methods cannot.\", \"&nbsp;\", \"Although we believe that a better terminology would distinguish between i) attaching nodes or fragments to a partial graph and ii) fully generating specific aspects of a molecule, e.g. the set of motifs, we chose to adopt established terminology from [Zhu et al., 2022] to ensure clarity and maintain consistency. However, we agree with the reviewer that this might lead to misunderstandings and misguide the interpretation of the experimental evaluation.\", \"To avoid misunderstandings, we removed or toned down *all* claims that rely on MAGNet being an OS model:\", \"We updated L185-186 to remove the classification of MAGNet as an OS model\", \"We updated L412-413, such that MAGNet is described to perform on par with motif-based approaches\", \"&nbsp;\", \"The classification of the methods in Table 1 was intended to help the reader contextualise the results, as the classification is consistent in itself (MAGNet, PS-VAE and DiGress as OS models), and aims to make up for inconsistencies in the literature. We acknowledge that this discussion needs more room in the paper, and we describe the categorisation of methods in more detail in App. C of the updated PDF. However, if the reviewer believes that the classification for this experiment is more likely to confuse than to help the reader, we are happy to remove the classification from this table entirely.\", \"Moreover, we do not consider MAGNet being an OS model to be our main contribution, cf. L74-76 as well as L84-92. To us, leveraging scaffolds instead of motifs, as well as our experimental evaluation, which was also highlighted by the reviewer, are the most essential aspects of our work.\", \"&nbsp;\", \"We want to thank the reviewer again for raising their concern and encouraging us to discuss the categorisation of methods in more detail. 
We hope that our updated manuscript as well as our clarifications have addressed the reviewer\\u2019s remaining concern satisfactorily.\"]}", "{\"comment\": \"Thank you for your comprehensive review and recommendation for acceptance at ICLR. Your guidance on MAGNet's implementation provides a clear deliverable for the final code release, and we deeply value the time and expertise you invested in evaluating our paper.\"}", "{\"comment\": \"We truly appreciate your thoughtful review and support for our paper's acceptance at ICLR. Your insights, regarding method classification in particular, were valuable, and we are grateful for your constructive feedback.\"}", "{\"summary\": \"This paper addresses a critical limitation in substructure-based molecular generative models: the inability to capture the structural diversity in molecular space due to missing complex structures from the motif vocabulary. The authors propose a novel approach that employs a structural scaffold vocabulary, leaving atom and bond types to be predicted by the model. This approach is intended to enrich structural diversity, with specific metrics introduced to highlight the advantages of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper tackles a significant issue in 2D molecular generation, and Figure 1a provides an illustrative example that emphasizes the limitations of current motif-based methods in generating novel molecular structures. This emphasis on structural diversity is both timely and impactful for the field.\\n\\n\\n2. Experimental results demonstrate an increase in generated structural diversity.\", \"weaknesses\": \"1. **Limitations in Generating Novel Molecular Structures**\\n\\n1.1 A primary concern is whether the proposed solution truly resolves the problem of generating novel molecular structures. 
As highlighted in Figure 1a, the generation of new substructures\\u2014particularly complex fused ring systems\\u2014appears challenging. Both the original motif approach and the scaffold motif proposed in this work seem constrained in their ability to construct unseen complex fused rings because they are hard to piece together using substructures in the vocabulary.\\n\\n1.2 Furthermore, while the scaffold motif approach can potentially reduce vocabulary size, it still requires additional specification of bonds and atom types on rings, which might affect the validity of atom valence of the generated molecules. \\n\\nThus, while the problem raised is compelling, the method appears to partially address, rather than fully resolve, this issue.\\n\\n2. **Writing**\\n\\n2.1 Overlap in Sections 3 and 4: Sections 3 and 4 present overlapping content, which could benefit from a clearer delineation. Specifically, Section 4 should focus more on detailing the network structures (architecture) within each module, and provide an illustration of the model structure. The high-level probability descriptions, already covered in Section 3, could be streamlined here.\\n\\n2.2 Figure Captions: In Figure 3, the captions for panels (b) and (c) appear reversed.\\n\\n2.3 References: The authors should thoroughly review the reference list for consistency. It is full of preprint references. Accepted papers should reference their final publication locations rather than preprints (e.g. MiCaM and ShapeMol, etc). Please also check for repeated links and unusual endings with paper abbreviations (e.g. G-SchNet, JODO, etc.).\\n\\n3. **Experimental Results**\\n\\nWhile structural diversity has been enhanced, some important metrics, such as FCD and SA, appear to drop. Additionally, there is no mention of the validity rate of generated molecules in Table 1.\", \"questions\": \"1.\\nCould the authors provide the **validity** of generated molecules? 
Are there specific metrics used to ensure that generated molecules are fully **connected**, given that some examples (such as in the bottom row of Figure 11) suggest molecules with broken structures?\\n\\n2. \\nA comparison with baseline methods in terms of **generation speed** would add valuable context for practical deployment. \\n\\n3. \\nCould the authors clarify the **vocabulary size** difference between your approach and baseline models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"&nbsp;\\n\\nWe are grateful for the reviewer's insightful feedback and are pleased with the positive comments on our work. In the following, we would like to clarify the remaining questions and concerns.\\n\\n&nbsp;\\n\\n> Absence of synthesizability considerations \\n\\n- Thank you for this constructive feedback. We agree that synthesizability warrants discussion and have added a paragraph to the limitations section, cf. L285-288. While our manuscript focuses on standard small molecule datasets, we acknowledge that the absence of synthesizability considerations affects the practical application of fragment-based methods, particularly when moving from computational predictions to experimental validation. We believe that synthesizability is inherently linked to the training data, and that utilising a dataset of multi-component reactions [1] could help address this limitation.\\n\\n> Missing metrics (Novelty, Uniqueness, Validity)\\n\\n- We completely agree with the reviewer\\u2019s feedback and acknowledge that we neglected to refer the reader to Appendix D.3, Tab. 4 in particular, which includes the mentioned metrics for Novelty and Uniqueness. 
Moreover, the number of samples to obtain 10k valid molecules can be deduced from the reported validity and amounts to roughly 12k for DiGress.\\n\\n> Clarification of the benchmarking procedure\\n\\n- As we outline in L321-322, all models are trained on the ZINC data and we conduct the GuacaMol and MOSES benchmarks on the respective test set of the ZINC data. We updated our manuscript to better reflect this setup, cf. L322. Note that we use the other mentioned datasets as part of the zero-shot experiment (Appendix D.6 and Figure 8) that investigates the ability of MAGNet and the baseline models to represent molecules from different distributions. Across datasets, we show that the inductive bias as set through our fragmentation and scaffold approach leads to more faithful reconstruction of molecules from their latent codes. \\n\\n> Inclusion of methods that specifically optimise for goal-directed generation of molecules \\n\\n- We appreciate this suggestion. In our manuscript, we focus on the comparison with fragment-based methods to establish a clear methodological context. For this, we delineate between the optimisation scheme (Gradient ascent and MSO) and the underlying generative procedure (MAGNet, MiCaM, MoLeR). Following the reviewer\\u2019s suggestion, we checked the setting used in [2]. Comparing our results for the goal-directed benchmark with those reported in [2], MAGNet should perform competitively. However, the goal-directed generation setting in our experiments and the results reported in [2] are not directly comparable due to the varying number of oracle calls. 
For the final version of the manuscript, we would be happy to include a thorough evaluation of MAGNet following the benchmark setting in [2].\\n\\n> Clarifications \\n- We updated the caption of Fig.1 and aligned it better with the structure (a,b,c) of the figure as suggested by the reviewer.\\n- In structuring the manuscript, we intended to distinguish between the general mathematical formulation of our factorisation approach (Section 3) and MAGNet's specific implementation details (Section 4), with Fig. 2 illustrating the key concepts of our factorisation, cf. L130-133. We would appreciate more specific guidance on which aspects are not sufficiently visualised in its current version. \\n- Following the reviewer\\u2019s suggestions we tried to provide more specific information about the main baselines through the manuscript. In particular, we would like to refer the reviewer to L300-307.\\n- Thank you for highlighting a lack of clarity for the conditioning setting presented in Fig. 5. In our response to Reviewer hxc4, we provide a more comprehensive discussion of the practical relevance and potential applications of this conditioning approach. We furthermore updated Sec.5.4 with these details to better illustrate how MAGNet enables flexible conditioning.\\n\\n> Minor Comments\\n- We have addressed all minor comments and typos in the updated PDF. Thank you for thoroughly reviewing the manuscript and providing such detailed feedback.\\n\\n> Impact of vocabulary size and experiments analysing the effect \\n- We thank the reviewer for their suggestion. We believe an analysis of this kind greatly supports the claims and contributions of our work and have included a detailed discussion in our general response to highlight these results for all reviewers.\\n\\n&nbsp;\\n\\nWe appreciate the positive feedback and are excited to move forward with these improvements. 
We hope that we addressed all outstanding concerns satisfactorily and welcome further discussion.\\n\\n&nbsp;\\n\\n[1] Graziano et al., Multicomponent reaction-assisted drug discovery: A time-and cost-effective green approach speeding up identification and optimization of anticancer drugs, 2023 \\n[2] Gao et al., Sample Efficiency Matters: A Benchmark for Practical Molecular Optimization, 2022\"}", "{\"comment\": \"&nbsp;\\n\\nWe want to thank the reviewer for their thoughtful review. We particularly appreciate your recognition of the significance of the problem we address and our contribution in highlighting the limitations of current motif-based approaches. We provide detailed answers to the raised questions and concerns in the following. \\n\\n&nbsp;\\n\\n> Is the problem of structural diversity fully resolved? & Significance of experimental results\\n- We believe that making the community aware of critical limitations of fragment-based methods together with our novel method and evaluation are important contributions and we hope that our work will spark followup research that will eventually resolve the challenges regarding structural diversity. \\n- Problem and Evaluation: In our work, we propose a new perspective on the evaluation of molecular generative methods. While standard benchmarks rely heavily on metrics like FCD scores, our work demonstrates that this focus is insufficient. A similar observation has also been made in the context of computer vision [1]. Accordingly, we introduce novel evaluation criteria to account for the importance of structural expressivity, which is an overlooked aspect of molecular generation.\\n- Approach: Our scaffold-motif approach represents a step towards addressing these limitations, even if it does not fully resolve them. The drop in standard metrics like FCD and SA actually highlights an important trade-off: methods optimised for these conventional metrics sacrifice structural expressivity. 
With MAGNet, we aim to strike a balance that is overall beneficial for molecule generation and, thus, drug discovery.\\n\\n> Impact on separation of structure and features on validity of generated samples, Q: Validity and Connectivity\\n- We agree that the impact of the changed inductive bias has to be analysed thoroughly.\\n - For the effect on sample quality, we specifically dedicated our analysis in Sec.5.3 to the accurate prediction of features both in terms of generation (Fig.4 a,b) and reconstruction (Fig.4c). We clearly demonstrate that the free featurisation as implemented through MAGNet is not a limitation but a feature and beneficial for sample diversity. \\n - We want to further support these findings by referring the reviewer to App. D.3, Tab.4 in particular, which complements the benchmark from Sec. 5.2. Unfortunately, we neglected to make this reference in the initial submission and have updated the manuscript accordingly, cf. L419. Due to validity constraints, MAGNet achieves 100% validity. Like other methods, we always consider the largest connected component for analyses. For the complete generated output, as shown in Fig.12, the retention percentage for MAGNet is >93%, with 80% of the generated molecules being fully connected.\\n\\n> Writing\\n- We structured the manuscript to separate the general mathematical framework of our factorisation (Sec. 3) from the model (MAGNet) that matches this factorisation (Sec. 4). Following the reviewer\\u2019s feedback, we have clarified the manuscript structure with an introductory note in Section 3 and detailed the network architectures throughout Section 4, making the distinction between theoretical framework and implementation more explicit.\\n- Many thanks for pointing out the incorrect panel order in Fig.3. We have corrected this in the updated PDF. 
\n- We have updated all references to cite the published versions of the papers and revised the manuscript accordingly.\\n\\n> Q: Generation speed: \\n- We conducted a quantitative analysis on the generation speed. While the reported results certainly reflect aspects of the methodological design, the specific generation times also heavily depend on the chosen framework and implementation. However, our choice to design MAGNet in a hierarchical OS fashion relies more on the advantages connected to its ability to consider the entire molecular context at all generation steps. Please refer to our response to reviewer hxc4 (W1) for additional details. \\n|Method|s/10 samples|\\n|-|-|\\n|PS-VAE|0.293|\\n|MiCaM|0.583|\\n|MoLeR|2.760|\\n|MAGNet|1.200|\\n\\n> Q: Vocabulary Size: \\n- Using our fragmentation approach, the vocabulary reduces to 347 distinct scaffolds, which corresponds to 7371 motifs. Note that our fragmentation is more efficient than those used in MoLeR and MiCaM, which amount to roughly 14000 and 15600 motifs, respectively. \\n- For a fair comparison, we chose the same vocabulary size (350) for all baselines. Inspired by the question of reviewer w2Yn, we also compare MAGNet against a MoLeR model with a vocabulary size of 2k, which we discuss in our general response.\\n\\n&nbsp;\\n\\nWe hope that we have adequately addressed the raised concerns and believe the reviewer's feedback has enhanced the quality of our manuscript. We look forward to further discussion.\\n\\n&nbsp;\\n\\n[1] Jiralerspong et al., Feature Likelihood Score: Evaluating Generalization of Generative Models Using Samples, 2023\"}", "{\"summary\": \"This paper proposes a new way to address chunk-based molecule generation. Instead of using fully specified molecular subgraphs (motifs) similarly to prior work, the authors instead abstract out motifs to their connectivity skeletons, which allows for a smaller vocabulary to cover a wider range of possible motif realizations. 
The authors then show a factorization of the generative procedure that first builds the scaffold by assembling these motif skeletons and then gradually fills in the atom features. The approach is verified on standard generation and optimization benchmarks, showing decent performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(S1): The authors address an important problem of molecule generation, and identify a reasonable gap in the capabilities of prior models that they then try to address. The proposed approach is motivated clearly.\\n\\n \\n\\n(S2): The paper is generally well-written. The experiments are conducted across various setups that are common/relevant in this domain. While the empirical performance isn't across-the-board amazing, MAGNet seems to be a generally capable model with good performance overall, while improving on top of previous models in some settings, as well as enabling new capabilities (e.g. more ways to condition the generation on partial information).\", \"weaknesses\": \"(W1): The authors argue the most direct comparison is to other OS models, and focus on that angle. Setting aside the fact that MAGNet is not purely one-shot as the scaffold multiset `S` is decoded sequentially and autoregressively, even if we agree MAGNet is OS, is there any inherent value that comes from being an OS model? One thing that comes to mind would be faster inference, as stepwise generation models can be expensive due to repeatedly encoding the current partial graph. So, is MAGNet more efficient than sequential decoding models, e.g. MoLeR?\\n\\n \\n\\n(W2): More interesting conditioning settings depicted in Figure 5 could be explained in more detail. The partial molecule induces a partial scaffold multiset `S`; do you then use this partial multiset and continue generating to get a full multiset, then connect the scaffolds while forcing those connections that are implied by the conditioning? 
I assume extending the multiset `S` with further scaffolds cannot directly take into account the fact that some of the scaffold connections (or scaffold instantiations into specific motifs) are already known from the conditioning, because during training the scaffold multiset extension subnetwork assumes only a multiset of generic scaffolds is known. Could this be an issue causing the model to add scaffolds that don't fit well with the partial molecule? \\n\\n=== Nitpicks === \\n\\nBelow I list nitpicks (e.g. typos, grammar errors), which did not have a significant impact on my review score, but it would be good to fix those to improve the paper further. \\n\\n- Line 147: Denoting the join node as `j` could be an index clash given that the scaffolds being considered are denoted as `i` and `j` earlier in the sentence. \\n\\n- Line 155: \\\"factories\\\" -> \\\"factorize\\\" \\n\\n- Line 309: missing space\\n\\n=== Update 26/11 === \\n\\nAfter the author rebuttal I decided to raise my score from 6 (borderline accept) to 8 (accept).\", \"questions\": \"See the \\\"Weaknesses\\\" section above for specific questions.\\n\\nIf my questions/concerns are addressed, I would consider raising my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Among the reviewers, there was broad agreement that this work\\naddresses important and relevant problems for the task of molecule generation. 
Particularly, the proposed use of scaffolds instead of the (usually used) motifs was considered as an interesting and innovative contribution.\\n\\nThere were also some critical remarks concerning the technical description of related approaches, about potential limitation by only considering fragments (instead of synthesis), and about several details in the experimental evaluation and benchmarking process.\\n\\n After the rebuttal and discussion phase, however, I had the impression that most of these concerns could be addressed (at least partially), and I think that finally the strengths outweigh the weaknesses. Therefore I recommend acceptance of this paper\", \"additional_comments_on_reviewer_discussion\": \"Many points of criticism raised by the reviewers could be addressed in the rebuttal.\"}", "{\"comment\": [\"&nbsp;\", \"Thank you for the constructive feedback on our submission. In our response, we aim to clarify the remaining questions and concerns of the reviewer.\", \"&nbsp;\", \"> MAGNet is not purely one-shot (W1)\", \"We agree with the reviewer\\u2019s description and classification of our MAGNet being \\u201cnot purely one-shot\\u201d in the classical sense. As prompted also by reviewer Ry7i, we extend the discussion of this aspect in the updated manuscript and would like to refer the reviewer to both the response to reviewer Ry7i and App. C of the updated PDF.\", \"> Is there any inherent value that comes from being an OS model? (W1)\", \"As the reviewer correctly pointed out, sequential models incur additional computational overhead due to the repeated encoding of partial molecules, generally making them slower than models that do not condition on partial molecules. 
This difference is evident when comparing generation speeds, which we discuss in more detail in our response to reviewer 8uqV.\", \"However, we believe that models that avoid conditioning on partial molecules have another key advantage: their latent representation must fully capture the essential features of the molecule. In contrast, when conditioning on both the partial molecule and the latent representation, the influence of the latent code is not guaranteed. In OS models like MAGNet, the latent code is central to the generation process and dictates all subsequent steps, ensuring faithfulness to the representation, which we, for example, demonstrate through our displacement analysis presented in Fig.7.\", \"The hierarchical architecture is essential for MAGNet. Scaffold representations can vary significantly based on their position within the molecule. For example, the Murcko scaffold CC(C)(C)C often appears as CS(=O)(=O)C in a central position, but as CC(F)(F)F in less central contexts. MAGNet requires a complete understanding of the coarse-grained molecular arrangement to assign the correct representations to each scaffold.\", \"> Could [this conditioning scenario] be an issue causing the model to add scaffolds that do not fit well with the partial molecule? (W2)\", \"The hierarchical design of MAGNet, which avoids conditioning on partial molecules, offers greater flexibility for tasks like the one presented in Figure 5, where the model needs to condition on disconnected scaffolds. This type of multi-scaffold conditioning is challenging for sequential models, whereas it aligns naturally with MAGNet's factorised approach. Following the reviewer\\u2019s feedback, we have extended the description of this experiment in the updated PDF. In general, we share the concern that within our factorisation it might occur that scaffolds are connected such that they do not match with the conditioning fragment. 
This is likely linked to the complexity of the conditioning scenario.\", \"While we have not experienced a mismatch between the partial molecule and generated molecule so far, we recognise that it may arise in other scenarios or datasets. To support the model in such scenarios, we can envision:\", \"Refined Latent Space Navigation: Sampling latent codes in a subspace of the latent space that is in accordance with the conditioning fragments.\", \"Filtering: Incorporating a post-hoc filtering mechanism to eliminate incoherent samples.\", \"It would also be possible to incorporate additional architectural changes as listed below, while preserving the reliance on hierarchical decoding with scaffolds as building blocks:\", \"Additive Conditioning: Introducing an auxiliary term, like a scaffold embedding, or constraint in the generation process that explicitly accounts for scaffold compatibility with the partial molecule.\", \"Hierarchical Conditioning: Augmenting the model\\u2019s latent space to encode more detailed inter-scaffold relationships, ensuring compatibility even for disconnected scaffolds.\", \"Correction Step: Allowing the model to iterate over its prediction to ensure consistency between the condition and its final prediction.\", \"> Nitpicks\", \"We sincerely thank the reviewer for carefully pointing out the typos in our manuscript. We have corrected these issues and believe that it improves the clarity and presentation of our work.\", \"&nbsp;\", \"In conclusion, we appreciate the reviewer\\u2019s positive evaluation of our work and hope to have addressed the concerns raised to the reviewer\\u2019s satisfaction.\"]}", "{\"comment\": \"We are grateful for your positive endorsement and detailed critique, which significantly enhanced the clarity and presentation of our research. 
We found the discussion with the reviewer a pleasant experience and are thankful for the improvements it brought for the final manuscript.\"}", "{\"title\": \"General Response to all Reviewers\", \"comment\": \"&nbsp;\\n\\nWe sincerely thank all reviewers for their valuable feedback, questions, and suggestions. We are particularly encouraged by the positive remarks regarding the novelty of our approach, the critical evaluation of motif-based methods, and the thoroughness of our experimental analysis.\\n\\n&nbsp;\\n\\nWhile we provide detailed responses to individual comments, we would like to highlight the key ways in which the feedback has improved our manuscript:\\n- As encouraged by reviewers Ry7i and hxc6, we have extended the discussion around the generation procedure terminology (OS vs. Sequential). We believe that the updated PDF better contextualises the classification of individual models, while still guiding the reader through the experimental results.\\n- Based on the feedback from reviewer hxc6, we have improved the description and explanation of the conditional generation experiments.\\n- The feedback by reviewers 8uqV and w2Yn prompted us to evaluate MoLeR with a significantly larger vocabulary (2k motifs), strengthening the discussion around the expressivity of our vocabulary and fragmentation:\\n - Throughout the experimental section, all methods with a variable vocabulary were trained with the same number of motifs in their vocabulary, extracted from the same dataset, to allow for a fair comparison.\\n - MiCaM is one example of a model with a very large vocabulary, >15,000 motifs. Besides issues during the training because of the large vocabulary (reference), we observe throughout our analysis that this does not aid the model in the decoding of rare scaffolds.\\n - As an additional baseline for our experiment, we train a MoLeR model with a vocabulary of size 2,000, which is almost 6x the size of the MAGNet vocabulary. In App. 
D.1 of the updated PDF, we extend our analysis on the sampling and reconstruction of scaffolds. Interestingly, we find that an increased vocabulary can help with the sampling of scaffolds, in particular uncommon ones. However, we can also observe that even with a significantly larger vocabulary, MoLeR is not able to utilise this vocabulary to faithfully reconstruct uncommon scaffolds. We conclude that during unconditional generation, the model samples such scaffolds by chance, but cannot make use of them when they are explicitly specified by the latent representation, as is required when using the latent space for downstream tasks and procedures.\\n\\n&nbsp;\\n\\nIn summary, we greatly appreciate the constructive feedback we received. The reviewer\\u2019s suggestions have strengthened the claims of our work and improved the overall quality of the manuscript.\"}", "{\"comment\": \"Thank you for your detailed response to my comments. I now have a clearer understanding of how your method constructs a more efficient vocabulary and accommodates complex fused ring structures. I appreciate the effort you have put into addressing my concerns.\\n\\nBased on this, I have raised my score from 3 to 5. However, I could not provide a higher score as I feel that the experimental results are not yet fully competitive; for instance, metrics such as FCD and SA show some decline compared to certain baselines.\\n\\nBest regards,\\n\\nReviewer\"}", "{\"comment\": \"&nbsp;\\n\\nWe thank the reviewer for engaging in a discussion with us. Moreover, we are happy to learn that our response addressed the reviewer\\u2019s mentioned aspects satisfactorily. 
Below we provide a detailed answer to the reviewer\\u2019s question regarding MAGNet\\u2019s scaling behaviour.\\n\\n&nbsp;\\n\\n> MAGNet is around 2x faster than MoLeR\\n- We want to highlight that this setting, as correctly pointed out by the reviewer, is related to a small-scale experiment where we aimed to match generation speed with the underlying generation process (OS or sequential). \\n- Note that all considered models are in the same magnitude of generation time. DiGress, for example, as a diffusion model, is much slower, taking around 30 seconds per 10 molecules.\\n- In our response to Reviewer 8uqV, we already indicated that these numbers are conflated by the underlying DL framework, general implementation, and/or specific optimisation procedures.\\n\\n> Does the speed comparison change when decoding 100+ molecules at a time?\\n- Yes and no. In a large-scale setting, with large batch size and many thousands of molecules, MAGNet will benefit from larger batch sizes. MoLeR, on the other side, additionally benefits from (model-independent) optimisations specifically aimed at large-scale and parallel execution, leading to faster generation times.\\n- Note that such optimisations (e.g. static graph execution, multi-GPU inference, advanced batching) are not yet implemented for MAGNet as we considered them subordinate compared to our main contribution of addressing the limited diversity and expressivity of molecular generative models.\\n- However, during this discussion round, we got the impression that generation speed and, therefore, optimisations targeted at that, are generally relevant for the community. We are confident that we can provide support for fast large-scale inference as part of the final code release for MAGNet. We expect that we will likely match or exceed MoLeR\\u2019s generation speed also in the mentioned large-scale setting. 
\\n\\n&nbsp;\\n\\nWe thank the reviewer for asking this relevant question, giving us the opportunity to clarify the difference between a large-scale setting and the conducted small-scale experiment. We hope that this answers the question to the reviewer\\u2019s satisfaction and remain available for further discussion.\"}", "{\"comment\": \"&nbsp;\\n\\nThank you for your follow-up and further clarifying your concern. We appreciate the opportunity to further elaborate on how MAGNet addresses the challenge of generating complex fused ring systems.\\n\\n&nbsp;\\n\\nIn our work, we show that generative models operate with a fixed vocabulary, effectively limiting the number of \\u201cspots\\u201d available to represent structural components such as fused rings:\\n- **Models fail at generating complex structures, such as fused systems, that are not present in the vocabulary**. We demonstrate that the reliance on single atom representations in such scenarios is insufficient, cf. Figure 3 and 4.\\n- Moreover, **the naive approach of expanding the vocabulary size is not adequate.** We show that approaches like MoLeR (2000) and MiCaM cannot leverage their significantly larger vocabularies to address the challenge of generating complex fused ring systems, cf. Figure 9. \\n\\n&nbsp;\\n\\nWe conclude that **it is essential to allocate the limited vocabulary capacity wisely**. We present a solution to this in our work:\\n- Traditional motif-based models like MoLeR and PS-VAE allocate a disproportionate number of \\u201cspots\\u201d in their vocabularies to redundant scaffolds (from a structural perspective) with different motif representations, cf. Figure 6b. For example, slight variations in a single-ring structure can occupy multiple spots in their vocabulary, resulting in inefficiency. 
\\n- MAGNet, on the other hand, takes a more targeted approach: it *learns* atom and bond types, freeing up vocabulary capacity to represent the structures that are truly challenging to learn\\u2014complex scaffolds such as fused rings and uncommon junctions.\\n- **Because of its efficient use of vocabulary capacity, MAGNet can store and represent a broader range of fused ring systems and retains only the essential structural information in its vocabulary.**\\n- Minor but still relevant, our process of building the vocabulary (fragmentation), cf. Section 4.1, is already more efficient than existing approaches (cf. previous answer) which further benefits MAGNet in generating complex structures.\\n\\n&nbsp;\\n\\nIn summary, MAGNet\\u2019s design and the underlying expressive vocabulary enable the effective generation of complex fused ring systems. We sincerely hope this explanation clarifies why our method is inherently better equipped to address the mentioned challenge.\"}", "{\"title\": \"Thanks\", \"comment\": \"I have read the rebuttal and the revised version. I truly appreciated the great effort put in by the authors to respond to my doubts. All my concerns have been addressed, therefore I am glad to change my initial judgment. I'll be giving full acceptance and I sincerely hope this paper will make it to the conference.\\n\\nGood luck!\"}", "{\"summary\": \"The paper presents MAGnet, a generative model for molecules. MAGnet is based on scaffolds (an abstraction of molecular fragments without atom and bond information, just the graph structure), which are introduced to factorize of the molecule distribution. MAGnet is a VAE-like architecture which generates new molecules by first predicting a scaffold set and its connectivity from latent space, then by predicting the atom and bond types for each scaffold, and finally by adding leaf nodes. 
Experiments show good performance in comparison with several baselines, on standard benchmarks such as GuacaMol and MOSES.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"using scaffolds instead of motifs qualifies as an innovative proposal, which could be of value to the community\", \"experimental evaluation is extensive in both depth (many baselines) and width (two benchmarks)\"], \"weaknesses\": \"throughout the paper, MAGnet is considered a one-shot (OS) molecule generator, when in fact it is auto-regressive (AR). Appendix B.2 states that the set of scaffolds is generated auto-regressively, so I cannot understand how it could be considered one-shot. According to the definition by Zhu et al. (2022):\\n\\n\\n Sequential generation refers to generating graphs in a set of consecutive steps, usually done nodes by nodes and edges by edges. One-shot generation, instead, refers to generating the node feature and edge feature matrices in one single step.\\n\\nThis is very different from what is stated in Section 2 of the MAGNet paper:\\n\\nZhu et al. (2022) categorise the generation process further into sequential methods, building molecules per fragment while conditioning on a partial molecule.\\n\\nto justify the fact that MAGnet is OS. According to the same paper that is cited, MAGnet belongs to the AR category. If MAGnet is AR, then most claims made in the paper need to be toned down or changed because it falls in the same category as MoLeR, which usually has comparable or better performance than MAGnet. For example, the claim \\\"MAGnet is the best OS generator\\\" no longer makes sense.\", \"questions\": \"I'll be honest here. I really liked this paper and I was going to recommend clear acceptance until I understood that MAGnet was an AR model \\\"disguised\\\" as an OS model. 
From that point onward, I couldn't shake the feeling that it was framed as OS only because, if put in the AR category, the results would become not so impressive (although still good). I really hope this was an unintentional mistake or misinterpretation by the authors. That's a shame in my opinion, since I believe the proposal is original and the evaluation was very thorough, the technical side of this paper is almost flawless.\\n\\nI will be recommending rejection of this paper for the moment, because in this form too many claims made in this paper stem from an incorrect premise. However, I am willing to hear from the authors why they consider MAGnet one-shot instead of sequential and will re-evaluate whether I could reconsider my judgment after the rebuttal phase.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. However, I still feel that my main concern has not been addressed. Based on my understanding, your method still struggles to generate complex fused ring systems as shown in Figure 1a, correct? These systems involve not only innovative atom types on common rings but also novel innovations on the scaffold itself. As you mentioned in your motivation, some known drugs feature such unique scaffolds. This is something that traditional motif-based methods cannot easily generate from their vocabulary, and it seems that your scaffold vocabulary also fails to recover these types of scaffolds. So, while you have raised an important issue, it appears that the method has not yet effectively solved the problem you highlighted.\"}", "{\"comment\": \"I see, thank you. Indeed, it would be great for the released code to allow fast inference. 
In my view, there are several older works that, despite being well-cited, never got used that much by practitioners in the drug discovery community, as they didn't make use of parallelism/batching (and so were too slow to use in practice).\\n\\nBy the way, I also found the new results comparing to MoLeR with a very large vocabulary interesting. It seems believable that this model would indeed struggle to make use of a very large vocabulary efficiently.\\n\\nI raised my score to a full accept to reflect that most of my concerns were addressed.\"}", "{\"comment\": \"Thank you for your detailed response and for providing additional results. I agree that swapping the presentations of Fig. 1a and Fig. 10 could potentially highlight your contributions more effectively.\\n\\nBut, I still find it unclear why your vocabulary is considered more expressive and better equipped to accommodate more fused rings. From my perspective, abstracting motifs to scaffolds does not seem to address this challenge directly. Could you kindly provide a more intuitive explanation?\"}" ] }
5FKIynMPV6
Bounds on the Reconstruction Error of Kernel PCA with Interpolation Spaces Norms
[ "Yang Zhou", "Weihao Lu", "Qian Lin" ]
In this paper, we utilize the interpolation space norm to understand and fill the gaps in some recent works on the reconstruction error of kernel PCA. After rigorously proving a simple but fundamental claim that appeared in the kernel PCA literature, we provide upper and lower bounds on the reconstruction error of empirical kernel PCA with interpolation space norms under assumption $(C)$, a condition which is taken for granted in existing works. Furthermore, we show that assumption $(C)$ holds in the two most interesting settings (the polynomial-eigenvalue decayed kernels in a fixed dimension domain and the inner product kernel on the large dimensional sphere $\mathbb S^{d-1}$ where $n\asymp d^{\gamma}$) and compare our bound with existing results. This work not only fills gaps that appeared in the literature, but also derives an explicit lower bound on the sample size to guarantee that the (optimal) reconstruction error is well approximated by the empirical reconstruction error. Finally, our results reveal that the RKHS norm is not a relevant error metric in large dimensional settings.
[ "kernel principal component analysis", "reproducing kernel Hilbert space", "high-dimensional statistics", "convergence rate", "interpolation space" ]
https://openreview.net/pdf?id=5FKIynMPV6
https://openreview.net/forum?id=5FKIynMPV6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ttAOwXV1Xc", "mpQoY4005V", "lZOJyuHpmb", "lVmARO2lkA", "amao6Kr7vc", "advcm9XgYw", "XnaUX0dVTl", "RJZL3JQ4na", "MRrS2vZbDy", "Fx6qxHOS7i", "BGQZ6dVNZV", "7jJspvQcVm", "2VhVq1BLNR" ], "note_type": [ "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1730490802216, 1732698522496, 1733127157808, 1732540968205, 1732613268617, 1732541545504, 1732539625603, 1730634600598, 1732627674680, 1732541311712, 1732538826367, 1731063392765, 1730658237942 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_3t7G" ], [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_CWKU" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_aCWt" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_bkUh" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Authors" ], [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_aCWt" ], [ "ICLR.cc/2025/Conference/Submission5831/Reviewer_CWKU" ] ], "structured_content_str": [ "{\"summary\": \"The paper focuses on understanding and bounding the reconstruction error of kernel Principal Component Analysis (PCA) using interpolation space norms. This work fills gaps in previous studies on kernel PCA by providing rigorous proofs and new bounds on the reconstruction error under specific conditions. 
Key contributions include: 1. Upper and Lower Bounds on Reconstruction Error; 2. Applications to some interesting settings including Fixed Dimension Domain, for polynomially eigenvalue-decayed kernels, and High-Dimensional Sphere, for inner-product kernels where the dimension grows along with sample size. Moreover, the paper reveals that using [\\mathcal{H}]^{1}-norm in high-dimensional settings may be unsuitable due to inconsistent error behavior. In addition, a \"periodic plateau\" phenomenon in convergence rates is observed, where the reconstruction error rate stabilizes over certain intervals as the number of components (\\ell) changes.", "soundness": "4", "presentation": "4", "contribution": "4", "strengths": "The most significant strength, I think, is that overall the paper addresses a gap that exists among the most recent works in a rigorous and inspiring way. It involves not only the rigorous theoretical contribution mentioned above in the summary but also a novel use of Interpolation Norms that I personally find helpful and interesting for technical proofs. For applications, the high-dim behavior insights mentioned above are also of great practical importance given that kernel PCA is a practically popular method. Besides, the paper is well distinguished from the existing works. The paper offers a thorough comparison with existing bounds, demonstrating improvements over previous results and discussing where previous work lacked rigor. This transparency about advancements and limitations strengthens the credibility of the results.", "weaknesses": "Overall, the paper has few weaknesses. One point can be that its results are largely theoretical, with only limited empirical validation. More comprehensive experiments across various datasets and settings would strengthen the paper by providing practical evidence to support the theoretical claims. 
Since kernel PCA is a widely applied method, adding various types of empirical behavior would definitely make the paper more appealing. The other point, which is rather a suggestion for improvement from a practical point of view, is that, for example, parameters such as $s$ in Corollary 3.4 are in practice unknown. Some form of adaptation to find a data-driven $n$ is also needed here, as there are some works focusing on adaptation to smoothness parameters in terms of estimation and regression, etc. In general, the weaknesses mainly lie on the practical side (not the main focus of the paper), which, however, the strengths far exceed.", "questions": "1. Can you please elaborate more on your paper comparing with \"Rosasco L, Belkin M, De Vito E. On Learning with Integral Operators[J]. Journal of Machine Learning Research, 2010, 11(2)\", where the kernel $\\Sigma$ in your paper is actually an integral operator?\n\n2. How about extending the current result to the manifold setting? As there are existing results on the convergence of some particular kernel-based Laplacians on manifolds.\n\n3. Given the similarity between MDS and kernel PCA, would the error bounds work also for MDS?", "flag_for_ethics_review": ['No ethics review needed.'], "details_of_ethics_concerns": "N/A", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}", "{\"comment\": [\"Thank you for your answer. 
Regarding Q2, perhaps it could be useful to future readers to remark that $C_3$ is polynomial in the parameters (except beta, but the bound can be rewritten as $C'_3 (2\\ell)^{2\\beta}$), and especially in $\\tau$.\", \"I still have questions on the proof of Theorem 2.5, notably the case $0<s<1$ (which is the main case of interest):\", \"What is meant by \\\"maximizing $R_s(\\\\psi_{l+1}, ...)$\\\" on line 859?\", \"Why does the equation on line 866 imply that $A$ must be block-diagonal?\", \"What is meant by \\\"$A_1$ is non-zero only on $\\\\ell$ $e_q$'s\\\" on line 868?\"], \"By the way, I noticed some typos in the updated proof\": [\"on lines 792-800, the summation $q=1$ (or $p=1$) should be replaced by $q \\\\in N$ (or $p$)\", \"on line 802, $\\\\sum_j a_{pj} a_{qj} = \\\\delta_{pq}$, not $1$\", \"on line 840, the last two factors of the left hand side, $A_1$ and $\\\\Lambda$, are switched. This also impacts the reasoning for the case $s=1$ starting on line 842.\", \"on line 851, it could be helpful to explain explicitly why the space spanned by $A_1$ is invariant by the operator $\\\\Lambda H \\\\Lambda$ (IIUC it's as a consequence of (6)).\", \"on line 852, it could also be helpful to point out explicitly that this operator is symmetric, so that the space spanned by $A_2$ is invariant.\"]}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"**W**\\n\\nWe are sorry for not making our presentation clearer. We will aim to improve clarity in the revision. However, due to the complexity of kernel PCA and the limited number of pages in an ICLR manuscript, the use of some common notations might be unavoidable. To reduce potential miscommunication, please allow us to provide a more readable summary below:\\n\\n**Why $ [\\\\mathcal{H}]^{s} $ norm**\\n\\nReconstruction error is often used to evaluate the performance of PCA and KPCA. 
For PCA, the definition is \\n$$\\nR(\\\\beta_1,\\\\cdots,\\\\beta_{\\\\ell}):=\\\\mathbb{E}_ {X \\\\sim P} \\\\left\\\\| X-\\\\sum_{i=1}^{\\\\ell}(X^\\\\mathsf{T}\\\\beta_i)\\\\beta_i\\\\right\\\\|_ {2} ^ {2}.\\n$$\\nFor KPCA, the definition is \\n$$\\n\\\\mathcal{R}\\\\left( \\\\psi_{1},\\\\ldots,\\\\psi_{\\\\ell} \\\\right)\\n:=\\n\\\\mathbb{E}_ {X \\\\sim P} \\\\left\\\\|k(\\\\cdot, X)-\\\\Pi\\\\left(\\\\psi_1,\\\\ldots,\\\\psi_{\\\\ell}\\\\right)k(\\\\cdot,X)\\\\right\\\\|^2,\\n$$\\nwhere \\n$$\\n\\\\Pi(\\\\psi_1,\\\\ldots,\\\\psi_{\\\\ell}):=\\\\sum_{i=1}^{\\\\ell}\\\\left\\\\langle \\\\cdot, \\\\psi_i\\\\right\\\\rangle_{\\\\mathcal{H}}\\\\psi_i.\\n$$\\nSince kernel PCA operates in the RKHS (a function space derived by a kernel function) rather than $ \\\\mathbb{R}^d $, various norms can be used to define the reconstruction error of KPCA. Previous works consider the $ \\\\mathcal{H} $ norm (which is the original norm of RKHS) and the $ L_2(P) $ norm. Recent studies in RKHS regression strongly suggest that measuring the reconstruction error through $ [\\\\mathcal{H}]^{s} $-norm would provide new insights, linking the two norms used in previous literature ($ s=0 $ refers to the $ L_2(P) $ norm; $s=1 $ refers to the $ \\\\mathcal{H} $ norm).\\n\\n**Existing results:**\\n\\n**Lower bound of reconstruction error** \\nExisting results for the lower bound of reconstruction error mainly focus on the $ \\\\mathcal{H} $ norm and $ L_2(P) $ norm. For the $ \\\\mathcal{H} $ norm, the proof can be simply derived by extending similar results of PCA to KPCA. (Note that when extending PCA to KPCA, the Frobenius norm becomes the RKHS norm.) For the $ L_2(P) $ norm, there are some serious gaps in the previous proof that cannot be easily fixed. (See Remark 2.6 and Appendix C.1 for details.)\\n\\n**Upper bound of reconstruction error** \\nFor the $ \\\\mathcal{H} $ norm, the upper bound can be derived from the results in PCA. (See Proposition 2.2 for details.) 
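As a quick numerical sanity check of the reconstruction-error definitions above (an illustrative sketch added for the reader, not code from the paper or rebuttal): in the linear-PCA special case, projecting onto the top-$\ell$ empirical eigenvectors attains a reconstruction error equal to the sum of the trailing covariance eigenvalues, the identity underlying the $\mathcal{H}$-norm bounds discussed here.

```python
import numpy as np

# Empirical check: for linear PCA, the optimal ell-dimensional projection's
# reconstruction error equals the sum of the remaining covariance eigenvalues.
rng = np.random.default_rng(0)
n, d, ell = 500, 8, 3

X = rng.standard_normal((n, d)) * np.arange(1, d + 1)  # anisotropic data
X = X - X.mean(axis=0)                                  # center

cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]      # descending order

B = eigvecs[:, :ell]                                    # top-ell directions
residual = X - X @ B @ B.T                              # X - Pi(X)
emp_error = np.mean(np.sum(residual ** 2, axis=1))
tail = eigvals[ell:].sum()

print(emp_error, tail)  # the two quantities agree up to floating-point error
assert np.isclose(emp_error, tail)
```

The same bookkeeping carries over to empirical kernel PCA with the RKHS norm, with the eigenvalues of the (scaled) kernel matrix playing the role of the covariance spectrum.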
Previous results of the upper bound under the $ L_2(P) $ norm are incorrect as they wrongly claimed that assumption (C) in Proposition 3.1 holds with high probability. (See the discussion below Proposition 3.1 and Appendix C.2 for details.)\\n\\n**Convergence rate of reconstruction error under the polynomial eigendecay setting** \\nPrevious literature shows that under this setting, the convergence rate is $ \\\\ell^{-\\\\beta+1} $ for the $ \\\\mathcal{H} $ norm case; $ \\\\ell^{-2\\\\beta+1} $ for the $ L_2(P) $ norm case. Here $ \\\\beta $ is the eigenvalue decay rate, see Assumption 3.3 for details.\\n\\n**Our contributions:**\\n\\n**Lower bound of reconstruction error** \\nWe establish a lower bound for the reconstruction error under the interpolation space norm. The lower bound is attained by proving that the eigenfunctions of the $ \\\\ell $ leading eigenvalues minimize the reconstruction error (Theorem 2.5). To the best of our knowledge, we are the first to provide a rigorous proof of such theorems, especially in the interpolation space norm case.\\n\\n**Upper bound of reconstruction error** \\nWe successfully derive the upper bound of the reconstruction error under the interpolation space norm by emphasizing the importance of assumption (C) (Theorem 3.1). We also make efforts to ensure that assumption (C) can be verified in different cases. (See Remark 3.2 for details.)\\n\\n**Convergence rate of reconstruction error under the polynomial eigendecay setting** \\nBy applying our Theorem 3.1 to the polynomial eigendecay setting, we derive the convergence rate of $ \\\\ell^{-(2-s)\\\\beta+1} $, consistent with previous results.\\n\\n**Convergence rate under the large dimension setting (hypersphere)** \\nWe consider a hypersphere inner product kernel (a commonly used kernel in high-dimensional cases). Under this setting, we derive the convergence rate of the reconstruction error (Theorem 3.8). This leads to two notable observations. 
**First**, when $ s=1 $, the reconstruction error does not converge, indicating that the RKHS norm is unsuitable as an error metric in high-dimensional cases. **Second**, we identify a periodic plateau phenomenon, where the reconstruction error rate remains constant over certain ranges of $ \ell $ and decreases sharply over others (as illustrated in Figure 1(c)). A similar periodic plateau has been observed in other high-dimensional kernel methods.\n\nWe hope that the above summary convinces you of our work\u2019s significant contribution to the community and may help you reevaluate our paper. Please let us know if you have any further questions.\"}", "{\"comment\": \"**W**\\n\\nWe appreciate the reviewer's thorough reading and approval of our work. For your first concern about the limited empirical validation, we have offered a simple empirical experiment in Section 4 to exemplify our results. However, due to the computational difficulty of experiments with kernel methods, especially under the large dimension setting, we haven't found a suitable dataset under which the experiment can be done within days.\\n\\nFor your second concern about the choice of parameters such as $s$, we introduced the interpolation space to bridge the gap between the two previously used norms: the RKHS norm and the $L_2(P)$ norm. The norm in the interpolation space also highlights some theoretical insights, such as the limitations of the RKHS norm in high-dimensional contexts and the periodic plateau phenomenon, which might be difficult to detect using only the RKHS norm. 
Hence, the adoption of the interpolation space is motivated mainly by theoretical considerations.\n\nFor your third concern about the choice of $n$, we totally agree with you that a tighter bound on $n$ than the bound $n \geq \mathcal{C}_3\ell^{2\beta}$ we give here may exist when there are some underlying properties of the data. Investigating how to determine a data-driven $n$ remains a complex issue that requires further exploration, and we may consider addressing this problem in the future. We appreciate your insightful suggestion on pursuing a data-driven approach rather than relying solely on a global bound.\n\n**Q1**\n\nThank you for your request to elaborate more on the comparison with the paper of Rosasco L, Belkin M, De Vito E. In fact, we utilized Proposition 10 from their paper as a lemma to support our proof of Proposition 3.4 (see Lemma B.2). Their work enabled us to verify Assumption (C) under the polynomial eigenvalue decay assumption. We have also revised our manuscript to include a citation of this literature in Section 2.2, where the properties of the integral operators are discussed.\n\n**Q2**\n\nThank you for your insightful question on whether the current result can be extended to the manifold setting. To the best of our knowledge, the properties of specific kernels on manifolds are generally limited to a fixed-dimension context. Therefore, it might be feasible to extend the current results in Section 3.2 to manifolds. For the large dimension setting, an important lemma in our work is Lemma A.5, which shows that $ \mu_k = \Theta_d(d^{-k}) $ and $ N(d, k) = \Theta_d(d^{k}) $ for $ k \leq p+3$. Such structure in the spectrum allows us to describe reconstruction error, and can only be proved in the hypersphere setting to the best of our understanding. On the other hand, few results about the spectrum of kernels in the general domain can be found under large dimension settings (see, e.g., Remark 3.6). 
However, despite the difficulties of extending the large dimension results to the manifold setting, we believe that similar results shall exist, and the idea you point out is promising.\\n\\n**Q3**\\n\\nThank you for your insightful question regarding the potential extension of our results to MDS. To the best of our knowledge, MDS typically involves a dissimilarity matrix of the samples, with elements often based on distances. The method diagonalizes this matrix and selects the subspace spanned by the leading eigenvectors. However, MDS relies heavily on the specific samples chosen, and we have not come across literature addressing MDS in a population setting, which might make the analysis more challenging than in the case of kernel PCA. Additionally, defining the interpolation space in an MDS context could be problematic. Nevertheless, we do believe that MDS might yield similar or related results due to its parallels with empirical kernel PCA, and it is of great interest and importance to derive the similar results in MDS. Please let us know if there are any misunderstandings on our part regarding MDS.\"}", "{\"comment\": \"**W1: In the proof of Theorem 2.5, I don't understand why the orthonormality constraint $ (\\\\psi_1,\\\\ldots,\\\\psi_{\\\\ell})\\\\in B_{\\\\ell} $ does not appear in the stationarity condition of $ \\\\mathcal{R}_{s} $, equation (6). In fact, I did not manage to recover equation (6) at all. This step of the proof deserves more details.**\\n\\nThank you for your request for more details on the proof of Theorem 2.5. We have provided a proof that possesses higher readability. We hope that the updated version addresses your concerns about the proof of Theorem 2.5.\\n\\n**W2: The last sentence of the abstract is not very clear, and \\\"$[\\\\mathcal{H}]_1$ norm\\\" is not defined at that stage. 
Perhaps a more precise statement would be that \"the RKHS norm is not a relevant error metric in high dimensions\".**\\n\\nThank you for your advice on the last sentence of the abstract. We have already changed the expression according to your suggestion.\\n\\n**W3: The figures are difficult to read; the paper would greatly benefit from making them bigger. (E.g., with matplotlib, reduce figsize and increase dpi.)**\\n\\nThank you for your kind suggestion. Following your advice, we have increased the font size of Figure 1 and the figure size of Figure 2 in our revised manuscript. We hope this makes the figures clearer.\\n\\n**W4: The paper contains many typos and unusual wordings.**\\n\\nThank you for pointing out the typos and unusual wordings. We have revised our manuscript according to your suggestion.\\n\\n**Q1: Please address the missing step in the proof of Theorem 2.5 (see Weaknesses).**\\n\\nThank you for requesting further explanation of the proof of Theorem 2.5. We have already provided a more readable proof. Please let us know if you have additional questions about the proof.\\n\\n**Q2: In Corollary 3.4, what is the dependency of the constant $ \\mathcal{C}_3 $ on problem parameters? Is it polynomial?**\\n\\nThank you for your request to explicitly express $ \\mathcal{C}_3 $. By Lemma B.2, we have $ \\sup_{j \\geq 1}|\\lambda_j-\\widehat{\\lambda}_j| \\leq 2\\kappa \\sqrt{2\\tau} n^{-1/2} $, thus $ \\widehat{\\lambda}_{\\ell+1} \\leq 2\\lambda_{\\ell+1} $ holds when $ 2\\kappa \\sqrt{2\\tau} n^{-1/2}\\leq c_{\\beta} (\\ell+1)^{-\\beta} \\leq \\lambda_{\\ell+1} $. Hence, we can choose $ \\mathcal{C}_3 = 2^{2\\beta}8\\kappa^2\\tau c_{\\beta}^{-2} $.
The authors derive upper and lower bounds for the reconstruction error of empirical kernel PCA under specific conditions. They apply these bounds to two scenarios: polynomial-eigenvalue decayed kernels in a fixed-dimension domain, and the inner product kernel on a high-dimensional sphere, comparing their bounds to existing results. Notably, this work establishes a lower bound on the sample size necessary to ensure that the empirical reconstruction error approximates the optimal reconstruction error accurately. Additionally, the authors conclude that the $H^1$-norm is unsuitable for large-dimensional settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper provides a solid theoretical analysis of kernel PCA within the framework of a generalized norm, referred to here as the interpolation space norm.\", \"weaknesses\": \"The presentation is suboptimal, making the paper challenging to read in its current form. There are multiple instances where notations or concepts are referenced before they are formally defined, impeding the reader\\u2019s ability to verify the correctness of the claims.\", \"questions\": [\"In page 1, notation $\\\\otimes_H$ appears without definition. Also it is unclear whether $f(X)$ represents a vector or a matrix. Please clarify this notation and provide definitions for these terms.\", \"In page 3 there are discussions about the interpolation space norm, but this norm has not yet been defined. It\\u2019s difficult to follow the paper with references to undefined terms. Furthermore, a motivating explanation in the introduction about the significance of the interpolation space norm would be helpful. Why is this norm important, and how does it enhance the understanding of kernel PCA?\", \"What specific norm is used in equation (2)? Is it Frobenius norm?\", \"The presentation of this proposition could be improved. 
It references \\\"condition (2.11) in Rei\\u00df & Wahl (2020),\\\" yet does not restate the condition. Including the condition here would make the proposition more self-contained.\", \"In remark 2.3, what is meant by H norm?\", \"On page 5, inclusion map is not defined.\", \"In line 301 it is written $\\\\langle \\\\lambda_i^{(s-1)/2} \\\\phi_i, \\\\lambda_j^{(s-1)/2} \\\\phi_i \\\\rangle_{[H]^s} = \\\\delta_{ij}$. Is this a definition of this inner product or is it deduced from some other fact or property? For example shouldn't the right hand side be $\\\\lambda_i^{s-1} \\\\delta_{ij}$ instead?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you again for your important suggestions and approving the importance of our work and contribution to the community. Please let us know if you have further questions.\"}", "{\"comment\": \"**Q1**\\n\\nThank you for your concern about the definition. $\\\\otimes_{H}$ is defined in the notation part in page 4. For your convenience, we represent it here again. $a\\\\otimes_\\\\mathcal{H}a=\\\\langle a, \\\\cdot\\\\rangle_\\\\mathcal{H} a$, where $\\\\langle\\\\cdot,\\\\cdot\\\\rangle_\\\\mathcal{H}$ means the inner product in space $\\\\mathcal{H}$. In this circumstance, we provided another interpretation just behind $\\\\otimes_{H}$. We shall change $=$ to $:=$ to avoid misunderstanding. For your second request about $f(X)$, $f(X)$ is a random variable since $X$ is an element of $(\\\\mathcal{X},P)$ (note that $X$ and $\\\\mathbf{X}$ are different).\\n\\n**Q2**\\n\\nThank you for your feedback. We understand your concern that the temporarily undefined terms might make it challenging to follow the paper. In response, we indicated the sections where you can find detailed definitions. 
We believe it is more appropriate to present the definition of the interpolation space norm in the preliminaries section rather than in the introduction, as including all the detailed definitions upfront could make the introduction overly dense and harder to follow. The purpose of the introduction is to provide an overview and to set the context and motivation for the work. \n\nFor your second concern about the significance of the interpolation space norm, as is written in our contribution and the summary above, the interpolation space norm links the two previous types of reconstruction error considered by other literature, the RKHS norm reconstruction error and the $L_2(P)$ error. Considering the interpolation space norm also brings us some new insights, such as the two observations found in the large dimension case, the unsuitability of the RKHS norm and the periodic plateau phenomenon.\n\n**Q3**\n\nThank you for your question on the norm used in equation (2). As is shown in the notation part, the norm in equation (2) is the 2-norm in $R^d$. (From the definitions in (2), both $X$ and the $\beta_i$'s are vectors in $R^d$.)\n\n**Q4**\n\nThank you for your request for the condition mentioned in related works. We have already added the condition in the manuscript. (All the major changes of the main manuscript have been marked in blue. Typos and other minor changes are also updated in the new manuscript.) For your convenience, we display it here. \n\n**Condition** For all $s\leq \ell$, the following inequality holds:\n$$\n\frac{\lambda_s}{\lambda_s-\lambda_{\ell+1}} \sum_{j \leq s} \frac{\lambda_j}{\lambda_j-\lambda_{\ell+1}} \leq n /\left(16 C_3^2\right)\n$$\n\n**Q5**\n\nThank you for the request for an explanation of the $\mathcal{H}$ norm in Remark 2.3. The $\mathcal{H}$ norm is the norm of the RKHS. 
We mention the $\\\\mathcal{H}$ norm here before the RKHS part so as to announce that we want to compare this result with our result. We have added the explanation of $\\\\mathcal{H}$ norm in the remark according to your advice. Also, we added a reference in the manuscript where you may find detailed properties and explanations of RKHS.\", \"reference\": \"A. Caponnetto and E. De Vito. Optimal rates for regularized least-squares algorithm. Foundations\\nof Computational Mathematics, 7:331\\u2013368, 2007.\\n\\n**Q6**\\n\\nThank you for pointing out the need for clarification regarding the inclusion map on page 5. The inclusion map is a standard mathematical concept defined as follows: Given a subset $A$ of $B$, the inclusion map $I$ assigns element $x$ of $A$ to $x$, the latter $x$ is treated as an element of $B$,\\n$\", \"i\": \"A\\\\rightarrow B, I(x)=x.\\n$\\n\\n**Q7**\\n\\nThank you for your question. This is a definition of the inner product of the interpolation space, and we rewrite it in our revised manuscript as follows:\\n$$\\n\\\\langle \\\\lambda_i^{(s-1)/2}\\\\phi_i, \\\\lambda_j^{(s-1)/2}\\\\phi_j \\\\rangle_{[\\\\mathcal{H}]^s} :=\\\\delta_{ij}.\\n$$\\nNotice that ${\\\\phi_i}$ is a basis of the interpolation space, hence giving the definition of the inner product between $\\\\phi_i$ and $\\\\phi_j$ is enough to induce the inner product and norm of the whole interpolation space. The example you give is actually the inner product under the RKHS, which is a special case of the interpolation space when $s=1$.\"}", "{\"comment\": \"**W1**\\n\\nThank you for your advice that the hypersphere setting should be clarified in the introduction. We have updated the introduction according to your suggestion. \\n\\nAs for your second question, we consider the sphere setting for two main reasons:\\n\\n- On the one hand, the spectral properties of inner product kernels for uniform data distributed on a large-dimensional sphere are clear. 
In Lemma A.5 we have shown that $\\\\mu_k = \\\\Theta_d(d^{-k}) $ and $ N(d, k) = \\\\Theta_d(d^{k})$ for $ k \\\\leq p+3 $. Such a strong block structure in the spectrum, as described, leads to the periodic plateau behavior of the reconstruction error, and can only be proved in the hypersphere setting to the best of our understanding.\\n \\n- On the other hand, few results about the spectrum of kernels in general domains can be found under large dimension settings (see, e.g., Remark 3.6).\\n\\nWe will try to generalize our results to kernels in general domains in future work.\\n\\n**W2**\\n\\nThank you for your request for more discussion of the gaps. We shall add the following discussion in the main text. However, due to the constraint on main text length, we are unable to provide the whole Appendix C in the main text. We hope that this discussion relieves your concern.\\n\\n- For Theorem 2.5 (which refers to Appendix C.1 for the gap), the previous gaps mainly concern incorrect proofs of similar theorems, while the theorem itself remains true. The gap is mainly due to an incorrect decomposition of some operators and sets. We give the correct proof of the theorem, and hence fill the gaps in the existing literature.\\n\\n- For Proposition 3.1 (which refers to Appendix C.2 for the gap), previous works claimed that assumption (C) holds with high probability. The gap in their proof is mainly due to the incorrect claim that a specific operator is positive semi-definite.\\n\\nFor your second concern that the arguments fail for specific cases, we guess that it is assumption (C) that you think only fails for very specific corner cases. However, as shown in Remark 3.2, previous works consider U-statistics, for which assumption (C) is hard to verify. In order to overcome such difficulties, we consider a different setting so that the verification of assumption (C) becomes possible. 
Hence, we have made efforts so that assumption (C) can be verified in some important cases, rather than just pointing out a corner case under which gaps exist in the previous literature.\\n\\n**W3**\\n\\nThank you for your advice on the discussion of related works. After reading the literature you provided, we find that G. Santin and R. Schaback provide a link between the space spanned by the $ \\\\mathcal{H} $ eigenfunctions and the space spanned by the $L_2 $ eigenfunctions. The literature you provided is closely related, and we have added the discussion in our work accordingly.\\n\\n**Q1**\\n\\nThank you for your suggestion of adding the introduction of constants before using them in Section 1. We guess that the constants you mention are $ \\\\mathcal{N}_{\\\\Sigma}(t) $, which is a coefficient that can be bounded by no more than $ O(1/n) $. We have already added the explanation in the table. Please let us know if you have other questions about the constants used. (All the major changes of the main manuscript have been marked in blue. Typos and other minor changes are also updated in the new manuscript.)\\n\\n**Q2**\\n\\nThank you for your request to explicitly exhibit the condition. We have already added the condition in our revised manuscript. For your convenience, we also display it here.\\n\\n**Condition:** For all $ s \\\\leq \\\\ell $, the following inequality holds:\\n$$\\n\\\\frac{\\\\lambda_s}{\\\\lambda_s - \\\\lambda_{\\\\ell+1}} \\\\sum_{j \\\\leq s} \\\\frac{\\\\lambda_j}{\\\\lambda_j - \\\\lambda_{\\\\ell+1}} \\\\leq \\\\frac{n}{16 C_3^2}\\n$$\\n\\n**Q3**\\n\\nThank you for your question concerning whether Figure 1 corresponds to an experiment. Figure 1 is only a graphical illustration of the theoretical results, not an experiment. The experimental part is in Section 4. 
We will clarify in our manuscript that Figure 1 is only an illustration according to your suggestion.\"}", "{\"summary\": \"The paper gives bounds on the reconstruction error of kernel PCA, measured in the full scale of interpolation spaces $[H]^s$ between $L_2$ ($s=0$) and the RKHS ($s=1$).\\nAnalogous results exist in the literature for $s=0$ and $s=1$. However, the paper identifies a gap in the existing proofs for $s=0$ and gives an alternative, correct proof. Moreover, the results for $0<s<1$ are completely novel up to my knowledge, and they correspond to the existing ones in the limiting cases.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The topic is of current interest, and the results are clearly presented, discussed, and placed in the context of the existing literature.\", \"The new bounds extend the existing results to any $0\\\\leq s\\\\leq 1$. The extension is significant and relevant for applications.\", \"The identification and correction of a bug in the result $s=0$ is interesting and relevant to the community.\"], \"weaknesses\": [\"The results in large dimensions hold on the sphere, as is clearly explained in Section 3.3 and especially in Theorem 3.8. This is a reasonable setting, but it should be made clear upfront in the introduction (e.g., in Table 1 and in Section 1.2 under \\\"Convergence rate of empirical kernel PCA in large dimensions\\\"). It would also be interesting to know more precisely what the technical limitations beyond the sphere are.\", \"The results are in part motivated by filling gaps in the existing literature, namely those discussed in Appendix C1 and Appendix C2. To the best of my understanding, these gaps are identified correctly. But the claim is quite significant, and it should be better discussed in the main text. 
Also, according to the two appendices, the existing arguments fail for very specific corner cases: An effort should be made to clarify if these are cases of general interest.\", \"There are results in the approximation theory literature that seem to be closely related and should be discussed, e.g. Theorem 3 in [1] seems to prove a version of Theorem 2.5 for s=0. See also [2].\", \"[1] G. Santin and R. Schaback, Approximation of eigenfunctions in kernel-based spaces, Adv. Comput. Math. (2016)\", \"[2] I. Steinwart, A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths, J. Approx Theory (2016)\"], \"questions\": [\"Besides the points discussed above, there are the following minor points:\", \"Several constants are used in the introduction (Section 1) without being introduced. This makes the discussion sometimes difficult to follow.\", \"What does condition (2.11) in Reiss and Wahl (2020), quoted in Proposition 2.2, mean?\", \"Figure 1: The experimental setup is missing here, and it's unclear whether the plots correspond to an actual experiment. This should be specified.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work is concerned with kernel PCA, and specifically with the statistical performance of the empirical estimator, defined as the $\\\\ell$ principal components of the empirical kernel matrix $\\\\widehat{\\\\Sigma}_{ij} = k(X_i, X_j)$.\\nThe main contributions are upper bounds on the empirical estimator's reconstruction error as a function of $\\\\ell$, for several error metrics, and under several statistical settings.\\n\\nThe error metrics considered are the interpolation space norms $[H]^s$ for $0 \\\\leq s \\\\leq 1$, defined in Section 2.4. 
For $s=0$ this amounts to considering the estimator's $L^2$ reconstruction error, and for $s=1$ it amounts to the RKHS-norm reconstruction error.\\n\\nThere are three statistical settings considered:\\n- An abstract setting (section 3.1) where the only assumption is a condition referred to as \\\"assumption $(C)$\\\" in the paper. The following subsections make use of this abstract result.\\n- The classical setting (section 3.2) where dimension $d$ is constant and where the kernel $k$ has polynomially decaying eigenvalues.\\n- A high-dimensional setting (section 3.3) where sample size $n \\\\asymp d^\\\\gamma$ for some fixed $\\\\gamma>1$.\\n\\nAnother contribution is a rigorous proof that the minimum $[H]^s$-norm reconstruction error admits a simplified expression for all $0 \\\\leq s \\\\leq 1$ (Theorem 2.5).\\n\\nA secondary contribution is the remark that, in the high-dimensional setting of section 3.3, the RKHS-norm reconstruction error _of any estimator_ does not vanish as $n \\\\to \\\\infty$, implying that this error metric is unsuitable in this setting. Moreover, high-dimensional kernel PCA is shown to exhibit a similar phenomenon as in high-dimensional kernel regression: the periodic plateau behavior.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"I am unable to assess the originality of this work w.r.t. related literature, as I am not sufficiently familiar with this literature.\\n\\nIdentifying the important role played by \\\"assumption (C)\\\" in statistical analyses of kernel PCA is an interesting point.\\n\\nThe paper is very well structured and easy to follow.\", \"weaknesses\": \"In the proof of Theorem 2.5, I don't understand why the orthonormality constraint $(\\\\psi_1, ..., \\\\psi_\\\\ell) \\\\in B_\\\\ell$ does not appear in the stationarity condition of $\\\\mathcal{R}_s$, equation (6). In fact I did not manage to recover equation (6) at all. 
This step of the proof deserves more details.\\n\\nThe last sentence of the abstract is not very clear, and \\\"$[H]^1$ norm\\\" is not defined at that stage. Perhaps a more precise statement would be that \\\"the RKHS norm is not a relevant error metric in high dimensions\\\".\\n\\nThe figures are difficult to read; the paper would greatly benefit from making them bigger. (E.g. with matplotlib, reduce figsize and increase dpi.)\\n\\nThe paper contains many typos and unusual wordings:\\n- line 17, in the abstract, remove space after opening parenthesis\\n- line 118, add \\\"In\\\", or use \\\"contain\\\"\\n- line 343, the statement of assumption (C), remove \\\"If\\\"\\n- line 376, replace \\\"interested\\\" by \\\"interesting\\\"\\n- throughout, consider using the word \\\"setting\\\" in place of \\\"circumstance\\\", which is less commonly used\\n- line 424, remove space after opening parenthesis\\n- line 470, add \\\"A\\\" at the beginning of the sentence\\n- line 531, replace \\\"decayed\\\" by \\\"decaying\\\", and \\\"provide\\\" by \\\"provided\\\"\\n- line 537, replace \\\"on\\\" by \\\"of\\\"\", \"questions\": [\"Please address the missing step in the proof of Theorem 2.5 (see Weaknesses).\", \"In Corollary 3.4, what is the dependency of the constant $C_3$ on problem parameters? Is it polynomial?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
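As an aside on the $[\mathcal{H}]^s$ norms debated in the kernel PCA thread above: the interpolation space norm can be illustrated numerically with a toy eigendecomposition. The sketch below is illustrative only (the eigenvalues and coefficients are made up, not from the paper); it checks that the $[H]^s$ norm reduces to the $L_2$ norm at $s=0$ and the RKHS norm at $s=1$.

```python
import numpy as np

# Toy setup (not from the paper): f = sum_i c_i * phi_i, where phi_i is an
# L2-orthonormal eigenbasis of the kernel with eigenvalues lambda_i.
# Then ||f||_{[H]^s}^2 = sum_i lambda_i^{-s} * c_i^2, which interpolates
# between the L2 norm (s = 0) and the RKHS norm (s = 1).
lam = np.array([1.0, 0.5, 0.25, 0.125])  # illustrative eigenvalues lambda_i
c = np.array([0.3, -0.2, 0.1, 0.05])     # illustrative coefficients c_i

def interp_norm_sq(c, lam, s):
    """Squared [H]^s norm of f for the toy expansion above."""
    return float(np.sum(lam ** (-s) * c ** 2))

assert np.isclose(interp_norm_sq(c, lam, 0.0), np.sum(c ** 2))        # L2 norm
assert np.isclose(interp_norm_sq(c, lam, 1.0), np.sum(c ** 2 / lam))  # RKHS norm
# With eigenvalues <= 1, the norm is nondecreasing in s (stronger norms for larger s):
assert interp_norm_sq(c, lam, 0.0) <= interp_norm_sq(c, lam, 0.5) <= interp_norm_sq(c, lam, 1.0)
```

This also makes the reviewers' point concrete: a bound in $[H]^s$ for intermediate $s$ controls a whole family of error metrics between the two endpoints.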
5EuAMDMPRK
POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization
[ "Batuhan K. Karaman", "ishmam zabir", "Alon Benhaim", "Vishrav Chaudhary", "Mert R. Sabuncu", "Xia Song" ]
Balancing safety and usefulness in large language models has become a critical challenge in recent years. Models often exhibit unsafe behavior or adopt an overly cautious approach, leading to frequent overrefusal of benign prompts, which reduces their usefulness. Addressing these issues requires methods that maintain safety while avoiding overrefusal. In this work, we examine how the overgeneration of training data using advanced teacher models (e.g., GPT-4o), including responses to both general-purpose and toxic prompts, influences the safety and usefulness in instruction-following language models. Additionally, we present POROver, a strategy to use preference optimization methods in order to reduce overrefusal, via employing a superior teacher model's completions. Our results show that overgenerating completions for general-purpose prompts significantly enhances the model's safety and usefulness balance. Specifically, the F1 score calculated between safety and usefulness increases from 74.4\% to 91.8\% due to a substantial increase in safety. Moreover, overgeneration for toxic prompts substantially increases the usefulness from 11.1\% to 57.6\% while maintaining safety. Furthermore, preference optimization algorithms, when applied with carefully curated preference data, can effectively increase a model's usefulness from 57.6\% to 82.1\% while maintaining comparable safety levels.
[ "LLM safety", "LLM usefulness", "Overrefusal in LLMs", "responsible AI" ]
Reject
https://openreview.net/pdf?id=5EuAMDMPRK
https://openreview.net/forum?id=5EuAMDMPRK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "shFEtQCOWL", "oDfpHKUjIU", "mbMZPUx5LG", "k8ldmVZYPP", "cLmMjyTmzo", "bQNgiBgx06", "XCvDzDd7Gk", "VoTzTTyAfd", "Pcz5VSqOJ0", "OmIE9KqzC3", "MS1vkDtgIh", "F3qk2qUS2q", "DEOAKeN1yf", "CHaSmSLN23", "AizUgY2keX", "7SozDzU4U5", "7DMf8enmqx", "6lrj0igyAn", "558SYYHQaa", "3J5WuAWJLv", "2VdJIaJBOc", "0O4oqVRUxB" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review" ], "note_created": [ 1732809031087, 1732809125778, 1732796991638, 1731447548298, 1733096954051, 1733202449421, 1732797811589, 1733011396042, 1731039603187, 1733210783082, 1733213055826, 1732864026480, 1730708108405, 1733211656991, 1733097006043, 1732806858820, 1732796522033, 1732795114434, 1737524215785, 1732795847643, 1734330701143, 1730701848053 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_h5oh" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_h5oh" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_PDLs" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_RwoH" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_DgZU" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_PDLs" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12792/Authors" ], [ "ICLR.cc/2025/Conference/Submission12792/Area_Chair_iTsD" ], [ "ICLR.cc/2025/Conference/Submission12792/Reviewer_RwoH" ] ], "structured_content_str": [ "{\"comment\": \"**W4: There's no analysis of the consistency of results across multiple training runs or different random seeds.**\\n\\nWe acknowledge that the consistency analysis with multiple training runs and random seeds is critical for assessing the stability of the results. While we could not attend them due to resource and computational constraints, we have significantly expanded our experiments to cover two model families (Llama 3 and Phi-3) across various sizes. Our conclusions remain consistent across all model families and sizes we tested, providing robustness. \\n\\n**W4,Q4: Ethical implications of our work.**\\n\\nWe believe that achieving the maximum level of safety is crucial in all applications. At the same time, high safety should not come at the cost of excessive overrefusal, which unnecessarily restricts legitimate user interactions. Importantly, the inverse - sacrificing safety measures to increase user freedom - is not an acceptable solution, as it could lead to harmful outcomes. Our work is an effort to maintain robust safety guardrails while preserving user freedom for appropriate requests, without compromising either aspect. This is essential for developing AI systems that are both protective and practical - ensuring safety without defaulting to overly conservative responses that could diminish the models' utility and accessibility. 
We have added this discussion to the Ethical Statement section and clarified our terminology throughout the revised manuscript.\\n\\n**Q1: Can you provide more insight into why advanced teacher models require more safety examples? Is this related to the complexity of their responses or other factors?**\\n\\nThank you for this question. We think that the need for more safety examples stems from GPT-4o's more complex response patterns. GPT-4o generates noticeably longer and more complex responses compared to GPT-3.5, as shown in Appendix C.1. This difference in response complexity may lead to nuanced safety signals during training. We have added these points to Section 4.1 in our revised manuscript. \\n\\n**Q3: Could you elaborate on how different rejection sampling criteria were selected? Were other criteria considered?**\\n\\nOur selection of rejection sampling criteria was primarily guided by their established presence in the existing literature. While we explored several potential criteria during our initial experimental design phase, we ultimately focused on the presented set due to their widespread adoption and empirical validation in the literature. We have included the relevant citations in our revised manuscript. In the future, exploring more methods and reward models may provide novel insights for safety and usefulness. We have added this point to the Limitations and Future Work section in our manuscript. \\n\\n**Q3: How sensitive are the results to the specific thresholds used in rejection sampling?**\\n\\nThe results were not vastly different except at the two edge values: When \\u03c4 = 0 (meaning no toxic prompts in the preference training set), the model's safety performance dropped significantly for small gains of usefulness. We hypothesize that this occurred because the primary training signal becomes unconditional compliance with all prompts without toxic examples in the training set. 
When \\u03c4 = 0.5, the model's usefulness stayed too low while high safety was maintained throughout the training. We have added this discussion to Section 4.2 in our revised manuscript.\\n\\n**Q5: How do the results of POROver compare to other existing methods for improving LLM safety and reducing overrefusal? Are there any specific scenarios where POROver outperforms or falls short of other approaches?** \\n\\nPOROver is specifically designed to target models that are already highly safe but exhibit high overrefusal rates. Its primary goal is to enhance usefulness without compromising the existing safety of the model. This positions POROver as a complementary approach to safety fine-tuning or alignment techniques, which often focus on improving safety at the potential expense of usefulness.\\nTo the best of our knowledge, no other post-training method specifically addresses reducing overrefusal while maintaining high safety in highly safe models. Methods like RLHF or other preference optimization algorithms focus broadly on alignment but may not explicitly mitigate overrefusal. POROver is novel in applying pairwise preference optimization to this specific problem using overgenerated data from superior teacher models.\\nRegarding limitations, POROver requires high-quality teacher model completions, which can be resource intensive. Additionally, while POROver shows promising results in reducing overrefusal in safe models, its application to less safe models remains unexplored. It is unclear whether POROver would offer advantages in such scenarios and investigating this is an interesting direction for future work.\"}", "{\"comment\": \"**Q6: Have you explored automated methods for tuning the containment threshold \\u03c4? Were other preference optimization methods considered besides DPO?**\\n\\nWe acknowledge that there are opportunities to further enhance POROver. 
Due to computational constraints, we have not explored automated tuning of the containment threshold or investigated alternative optimization methods. We believe these directions hold significant promise for future work. Specifically, incorporating automated hyperparameter tuning tools and exploring reference-free preference optimization methods could potentially lead to more efficient implementations than the current approach. We have included these directions in our Limitations and Future Work section.\\n\\n**Q6: How does the slight safety compromise in OR-Bench Toxic relate to the containment threshold?**\\n\\nOur grid search over different containment thresholds showed that each threshold value leads to a slightly different point in the safety-usefulness trade-off curve. Based on this empirical finding, we believe that our explored values of tau might be suboptimal. In fact, this is closely tied to the need to explore automated methods for tuning the containment threshold. We have added this point to the Limitations and Future Work Section of our revised manuscript.\\n\\nThank you very much for your valuable time and thoughtful review! We welcome any additional questions and are happy to provide further clarification.\"}", "{\"comment\": \"**Q1: Looking at Figure 5, it looks like there was little improvement in XSTest performance and a reduction in over-refusal for ORBench but no improvement in safety. Why do you think this is? Is it related to the training set being ORBench?**\\n\\nWe would like to note that we show an ablation analysis in Figure 5 in our revised manuscript. The before- and after-POROver results for Llama-3.1-8B are in Figure 4 and the results for Phi-3 are presented in Figure 10. 
Both figures demonstrate that POROver achieves its primary goal: reducing over-refusal while maintaining the model's existing safety levels.\\n\\nThe smaller gains in XSTest's Not-Overrefusal Rate compared to OR-Bench can be explained by ceiling effects - the base model was already performing well on XSTest (92.8% Not-Overrefusal Rate for Phi-3-7B), leaving limited room for improvement. We suspect that this is because XSTest is a smaller, older benchmark with less diversity. In contrast, Phi-3-7B started at just 54.8% Not-Overrefusal Rate on OR-Bench, giving us more headroom to demonstrate improvement - which we achieved by reaching 85%. We note that we observe consistent improvements across all tested Llama models, confirming that these patterns are not specific to any single model architecture. \\n\\nWe have revised Section 4.2 to communicate these points more clearly.\\n\\nThank you very much for your valuable time and thoughtful review! We welcome any additional questions and are happy to provide further clarification.\"}", "{\"summary\": \"The authors present a framework that aims to reduce overrefusal in Large Language Models (LLMs), while improving their safety. It involves finetuning on overgenerated training data from teacher models, such as GPT-4o, and preference optimization to guide models to respond in benign (but possibly seemingly toxic) prompts. 
Through experiments on Phi-3 7B and various teacher models, the authors find that their method achieves significant reduction in overrefusal, while maintaining a balance between usefulness and safety.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles an important aspect of LLMs, aiming to investigate and improve their tradeoff in maintaining (or even improving) their safety, without, however, undermining their usefulness due to overrefusal.\", \"The experiments presented are extensive; the effectiveness of the presented method has been evaluated on a variety of datasets and benchmarks related to safety and overrefusal.\", \"The experiments suggest that the proposed framework effectively results in a balance between usefulness and safety, without significantly undermining the general capabilities of the tested model.\"], \"weaknesses\": [\"As the authors acknowledge, a limitation of their study is that the proposed framework is only tested on a single model family and size (e.g., Phi-3 7B). In my opinion, while the results are promising, this limitation is significant; given that the framework relies on finetuning and preference optimization of pretrained models, testing it across diverse model families and scales would prove its effectiveness and generality. It is unclear to me whether the results would be similar in that case.\", \"Adding more fine-grained experiments on the number of Added Safety Data (ASD) would make the claim that **the proposed method is effective without undermining the general abilities of the tested model** more convincing.\"], \"questions\": [\"Although experiments on models with different scales were not included, how do you expect the models to behave, assuming that they come from the same family? Would the benefits saturate as the number of parameters increases?\", \"How sensitive is the proposed method to the choice of the hyperparameter $\\\\tau$? 
Were the results vastly different across your grid search?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their review and comments.\\n\\nWe will answer the points raised individually.\\n\\n**W1, Q1: Exploration of different model families.**\\n\\nWe agree that experimenting with different model families and sizes is crucial to achieve robust and convincing results. In response to this feedback, we have expanded our experiments to cover multiple model families and sizes. Our revised manuscript includes results for Llama-3.1-8B, Llama-3.2-3B, and Phi-3-7B. 
We present the results for Llama-3.1-8B in the main text and share the results of Llama-3.2-3B and Phi-3-7B in Appendices D.3 and D.2, respectively. Given the established limitations of LLaMA models in terms of safety and usefulness [1][2][3], we strategically focused on using them as student models rather than teachers.\\n\\nRegarding the robustness of our results, while we observed subtle variations in the exact Not-Unsafe and Not-Overrefusal Rates across different student models during instruction finetuning, the comparative trends between using older and newer teachers remained consistent. In addition, POROver effectively reduced overrefusal while maintaining safety across all tested models. Therefore, our conclusions remain consistent across all models we tested.\\n\\n[1] Bianchi, Federico, et al. \\u201cSafety-Tuned LLaMAs: Lessons from Improving the Safety of Large Language Models That Follow Instructions.\\u201d OpenReview, 2024, openreview.net/forum?id=gT5hALch9z.\\n\\n[2] Cui, Justin, et al. \\u201cOR-Bench: An Over-Refusal Benchmark for Large Language Models.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.20947.\\n\\n[3] R\\u00f6ttger, Paul, et al. \\u201cXSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models.\\u201d ArXiv.org, 2023, arxiv.org/abs/2308.01263.\\n\\n\\n**W2: Added Safety Data (ASD) is only evaluated at three levels 0, 2K, and 20K. More data would be needed to defend the claim that there is a tradeoff between ASD and safety. I would expect it to saturate based on the amount of base diversity represented in the prompts.**\\n\\nWe agree that a fine-grained experiment would make the claim more convincing. We have expanded our initial ASD grid of {0, 2k, 20k} to {0, 2k, **5k**, **10k**, **15k**, 20k} for Llama-3.1-8B and evaluated its safety in our revised manuscript. As shown in Figure 5, the safety of the models increases with more ASD, and we see a saturation especially after 10k ASD, as you expected. 
\\nAdditionally, [1] has previously conducted fine-grained ASD analysis and showed the tradeoff between ASD and safety. We have added a citation to [1] to the related discussion in Section 4.3 our revised manuscript. \\n\\n[1] Bianchi, Federico, et al. \\u201cSafety-Tuned LLaMAs: Lessons from Improving the Safety of Large Language Models That Follow Instructions.\\u201d OpenReview, 2024, openreview.net/forum?id=gT5hALch9z.\\n\\n**W3: The figures have misleading axis starting at 85% to 100% for instance in Figure 4. This makes the difference look bigger than it is.**\\n\\nThank you for pointing this out. We have replaced the figure causing confusion with a table (Table 2) in our revised manuscript. We have also revised the discussion in Section 4.1 to clearly communicate that the results are similar in Table 2.\\n\\n**Q2: Is it possible that Phi-3 already has safety training that's particularly prone to over-refusing?**\\n\\nIn our revised manuscript, we investigate both Phi-3 and the Llama family of models. Our expanded analysis shows similar overrefusal trends across these model families, suggesting this behavior is not unique to Phi-3.\\n\\n**Q3: I interpret ASD to be the number of datapoints added after rejection sampling. Is this correct? If so, this correlates with the extent the model is deviated from the base model.**\\n \\nYes, we define ASD as the number of toxic prompts added to the training set. The completions for these toxic prompts are overgenerated with GPT-4o and then rejection sampled. We agree that rejection sampling is a determining factor on how much the model is deviated from the base model. \\n\\n**Q4: It's not clear where 15%, 45.5%, and 95.5% come from in the abstract.**\\n\\nThank you for pointing this out. We have changed those overrefusal rates to Not-Overrefusal Rates (of Llama-3.1-8B) in our revised manuscript to increase the connection between the Abstract and the Results section.\\n\\nThank you very much for your time and review! 
We would greatly appreciate any additional questions and are happy to provide further clarification.\"}", "{\"comment\": \"Thank you for your response.\\nYour additional experiments and discussion address most of my major concerns. \\nThus, I have increased my overall score, and my soundness and presentation scores.\", \"below_are_some_minor_additional_comments\": [\"After moving the Phi-3 results to the appendices, I noticed a few minor figure reference errors in your draft. For example, around line ~466, it seems you should reference Figure 4 instead of Figure 11.\", \"I suggest including the model\\u2019s name (Llama-3.1 8B) in all figures in the main text, as you have done in the appendices, to improve clarity and make the paper easier to follow.\"]}", "{\"summary\": \"This paper is concerned with training language models that output safe content but do not refuse too often. They test two algorithmic techniques to achieve this. First, they use overgeneration, which involves sampling multiple possible outputs and choosing the best responses for training. Second, they generate preference data pairs, based on responses that were unsafe/over-refusal vs not.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper tackles an important problem of making models safer.\\n2. The paper evaluates on multiple benchmarks with different step counts to give broader analysis.\\n3. The paper's algorithm seems straightforward to implement.\", \"weaknesses\": \"1. I believe the algorithms in this work have limited novelty. Rejection sampling and preference optimization are some of the most used tools for current fine-tuning and safety alignment, so the paper needs to provide novel analysis instead.\\n2. I'm confused about the empirical gains. It seems that in Table 1, the random selection GPT-4o baseline performs on par with the rejection sampling, indicating that the filtering step is not that crucial. 
Moreover, in Figures 3 and 4, training on GPT-3.5 seems to be extremely safe (though it does not solve over-refusal). \\n\\nIn general, I suggest focusing on key empirical takeaways, ensuring that POROver improves upon simple baselines, and organizing the presentation of the results.\", \"questions\": \"1. Looking at Figure 5, it looks like there was little improvement on XSTest performance and a reduction in over-refusal for ORBench but no improvement in safety. Why do you think this is? Is it related to the training set being ORBench?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the valuable comments and feedback. We are happy to know that most of your concerns are addressed. We address your remaining concerns as follows:\\n\\n**Concern 1.** We expect the benefits of our methods to diminish as model size approaches that of the teacher models, since these larger models typically exhibit fewer safety and overrefusal issues. Our choice of model sizes aligns with standard practice in alignment research, where evaluation commonly focuses on 3B-8B models. While we acknowledge that testing on 70B+ models would provide additional insights, the computational costs and resource requirements make this impractical for our current study.\\n\\n**Concerns 2. and 4.** While our models demonstrate significantly improved safety during typical usage patterns, we acknowledge important limitations. Our experimental results show effective generalization across diverse instruction sets, and we contribute novel insights through evaluation on previously understudied overrefusal benchmarks. However, the models may still be vulnerable to edge cases involving unusual prompts, adversarial attacks, and jailbreaking attempts.\\nOur work establishes a foundational safety layer that balances safety with usefulness.
We recognize that determined users might bypass these safeguards, yet maintaining strong safety defaults remains crucial for many user-facing applications. Our approach creates meaningful friction against misuse while preserving the model's intended functionality.\\n\\n**Concern 3.** Given that we are using well-established training algorithms (SFT and DPO), we do not analyze their computational costs in our work. We believe that providing the relevant API costs and dataset sizes is sufficient for users to assess the feasibility of our approach for their use cases. We also note that the convergence times do not vary between different training datasets, which would simplify their analysis. \\n\\nThank you very much for your valuable time and thoughtful review! We welcome any additional questions and are happy to provide further clarification.\"}", "{\"comment\": \"We thank the reviewer for the valuable comments and feedback. We are happy to know that our revised manuscript is helpful. We address your remaining concerns as follows:\\n\\n**1. Method is not conceptually different from other fine-tuning algorithms:** \\n\\nWe acknowledge the importance of situating POROver within the context of existing approaches like Constitutional AI, RLAIF, and preference optimization methods.\\n\\n\\u2022 Constitutional AI: While Constitutional AI uses a predefined set of principles to critique and refine outputs, POROver focuses on explicit overrefusal reduction using pairwise preference data. Unlike Constitutional AI, which relies on rule-based critiques, POROver targets the trade-off between safety and usefulness by leveraging advanced teacher model completions.\\n\\n\\u2022 RLAIF: POROver differs from RLAIF by avoiding the computational overhead of reinforcement learning and reward modeling.
Instead, it uses DPO with curated preference data, simplifying the alignment process while achieving similar benefits.\\n\\n\\u2022 Preference Optimization: POROver innovates on standard DPO by generating preference datasets specifically for reducing overrefusal. This involves leveraging advanced safety scoring methods (e.g., Llama Guard 2) to improve usefulness while maintaining safety. By explicitly targeting overrefusal reduction, POROver fills a gap in existing methods that often prioritize safety at the cost of usefulness. We believe this demonstrates its complementary and novel contribution to the field.\\n\\n**2. The gains/scope are somewhat limited.**\", \"porover_addresses_an_under_explored_but_significant_challenge\": \"reducing overrefusal in models that are already highly safe. This issue affects many prominent models including Claude-3, Gemini-1.5, Llama-2, and Llama-3. While existing safety fine-tuning and alignment techniques often improve safety at the cost of increased overrefusal, POROver takes a complementary approach by optimizing model usefulness while maintaining safety levels. Given the widespread occurrence of overrefusal across major language models, our method has broad practical applications.\\n\\nThank you very much for your valuable time and thoughtful response! We welcome any additional questions and are happy to provide further clarification.\"}", "{\"comment\": \"Thank you for your detailed responses and revisions to address the concerns raised. While I appreciate the effort to improve the manuscript, I will maintain my current scoring due to the following key concerns:\\n\\n1. Although the inclusion of multiple model families and sizes improves robustness, the lack of experiments with larger models (e.g., 70B+) significantly limits the generalizability of your conclusions. Understanding scaling behavior is crucial in assessing the broader applicability of your method, and this remains unexplored.\\n\\n2. 
Expanding evaluations to additional datasets is noted, but the paper still does not adequately address the need for benchmarks featuring diverse languages, cultures, and domains. This limitation makes it difficult to assess the true real-world utility and robustness of the proposed method.\\n\\n3. While you provide an estimate of GPT-4's API cost and training convergence times, the lack of detailed computational analysis still hinders practitioners' ability to evaluate the feasibility of implementing this approach in production.\\n\\n4. The absence of experiments on adversarial scenarios or consistency across training runs remains a significant gap. These analyses are critical for evaluating the reliability and stability of the proposed method in practical applications.\\n\\nThese points highlight critical areas where the paper still falls short, and I encourage further exploration and clarification in these aspects to strengthen the work. Thank you for addressing the other points comprehensively.\"}", "{\"summary\": \"This paper studies the impact of rejection sampling for safety training on the model's tendency to over-reject. The results show in the student teacher setting distilling from a stronger model like GPT-4 to Phi-3 the over-refusal reduces from near 100% to 45% on OR-bench.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1) The approach shows strong empirical improvement and scopes a relevant problem of over-refusing.\\n\\n2) Although the results are shown only on the 7B Phi-3 model, it's done on a variety of seemingly-toxic datasets. \\n\\n3) The results are supported by human annotations in appendix C.\\n\\n4) The paper speaks to the trade off on the amount of safety training data needed to achieve the level of desired safety. This is defined in terms of additional safety datapoints.\", \"weaknesses\": \"1) The approach involves distilling from already safety trained models. 
In particular, safety trained models that are also likely targeting similar datasets. The work shows gpt3.5 vs gpt4, but it would be more convincing to show Llama-3 as the teachers also, or a somewhat unsafe teacher model.\\n\\n2) Added Safety Data (ASD) is only evaluated at three levels 0, 2K, and 20K. More data would be needed to defend the claim that there is a tradeoff between ASD and safety. I would expect it to saturate based on the amount of base diversity represented in the prompts. \\n\\n3) The figures have misleading axis starting at 85% to 100% for instance in Figure 4. This makes the difference look bigger than it is.\", \"questions\": \"1) Do these results replicate on a different model other than Phi-3? It would be especially convincing if it replicates on different sizes of the llama-3 family of models.\\n\\n2) Is it possible that Phi-3 already has safety training that's particularly prone to over-refusing?\\n\\n3) I interpret ASD to be the number of datapoints added after rejection sampling. Is this correct? If so, this correlates with the extent the model is deviated from the base model.\\n\\n4) It's not clear where 15%, 45.5%, and 95.5% come from in the abstract.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I find the clarification of the work's intention to be \\\"reducing over-refusal while preserving original safety capabilities\\\" helpful, since it seems like there aren't strong improvements. I also understand the novelty in the problem space of reducing refusals. I increase my score to 5 since I still feel like the method is not conceptually different from other fine-tuning algorithms and that the gains/scope are somewhat limited.\"}", "{\"comment\": \"Dear reviewer DgZU,\\n\\nThank you again for your time! Since the discussion period ends tomorrow, we just wanted to see if our response has clarified your questions.
We hope you would consider increasing your score if we have answered your questions. Please let us know if you have additional comments and we are happy to follow up. Thanks!\"}", "{\"comment\": \"We thank the reviewer for their review and comments.\\n\\nWe will answer the points raised individually.\\n\\n**W1, Q2: Exploration of different model families and sizes.**\\n\\nWe agree that experimenting with different model families and sizes is crucial to achieve robust and convincing results. In response to this feedback, we have expanded our experiments to cover multiple model families and sizes. Our revised manuscript includes results for Llama-3.1-8B, Llama-3.2-3B, and Phi-3-7B. We present the results for Llama-3.1-8B in the main text and share the results of Phi-3-7B and Llama-3.2-3B in Appendix D2 and D3, respectively. \\n\\nWhile we observed subtle variations in the exact Not-Unsafe and Not-Overrefusal Rates across models during instruction finetuning, the comparative trends between older and newer teachers remained consistent. In addition, POROver effectively reduced overrefusal while maintaining safety across all tested models. Therefore, our conclusions remain consistent across all models we tested.\\n\\nWe were not able to conduct experiments on models larger than 8B parameters due to computational resource constraints. We leave the exploration of scaling behavior with larger models as future work. In our revised manuscript, we have included this point in the Limitations and Future Work section.\\n\\n**W1: The paper's evaluation methodology relies heavily on automatic metrics and a limited set of benchmarks.**\\n\\nWe thank the reviewer for this observation. Our evaluation prioritized XS-Test for human evaluation due to the large number of models being compared. While OR-Bench and XS-Test are the most relevant benchmarks for assessing overrefusal and safety, we acknowledge they may not capture all aspects.
We have expanded our safety evaluation to additional datasets beyond XSTest and Or-Bench, though resource constraints prevented us from exploring benchmarks with higher variability such as different languages, cultures, and domain specific contexts.\\n\\n**W2: The resource requirements for overgeneration with GPT-4o as well as the training efficiency are absent.**\\n\\nThank you for pointing this out. We have relied on GPT-4o\\u2019s API in order to perform overgeneration. GPT-4o\\u2019s API is roughly five times more expensive than GPT-3.5 (our baseline model family) according to [1]. We have added this discussion to Appendix C of our revised manuscript. \\n\\nRegarding the training costs, convergence times remained similar across different instruction finetuning datasets for the same base models. We have included this point to Appendix B of our revised manuscript.\\n\\n\\n[1] OpenAI. \\u201cPricing.\\u201d OpenAI, 2024, openai.com/api/pricing/.\\n\\n\\n**W3: The paper lacks a thorough comparison with existing safety and overrefusal reduction methods. While baseline comparisons are provided, the authors don't fully contextualize their results within the broader landscape of recent work on LLM safety alignment.**\\n\\nWe acknowledge the importance of situating our method, POROver, within the context of existing approaches like Constitutional AI, RLAIF, and preference optimization methods.\\n\\n\\u2022\\tConstitutional AI: While Constitutional AI uses a predefined set of principles to critique and refine outputs, POROver focuses on explicit overrefusal reduction using pairwise preference data. Unlike Constitutional AI, which relies on rule-based critiques, POROver targets the trade-off between safety and usefulness by leveraging advanced teacher model completions.\\n\\n\\u2022\\tRLAIF: POROver differs from RLAIF by avoiding the computational overhead of reinforcement learning and reward modeling. 
Instead, it uses DPO with curated preference data, simplifying the alignment process while achieving similar benefits.\\n\\n\\u2022\\tPreference Optimization: POROver innovates on standard DPO by generating preference datasets specifically for reducing overrefusal. This involves leveraging advanced safety scoring methods (e.g., Llama Guard 2) to improve usefulness while maintaining safety.\\nBy explicitly targeting overrefusal reduction, POROver fills a gap in existing methods that often prioritize safety at the cost of usefulness. We believe this demonstrates its complementary and novel contribution to the field.\\n\\n**W4: The paper doesn't examine how the method performs under adversarial conditions or when faced with edge cases.**\\n\\nWhile our models show major safety improvements for typical usage, they are not completely safe. Our benchmark results demonstrate good generalization, but the models may still be vulnerable to adversarial attacks and jailbreaking. Our work focuses on creating a layer of safety while maintaining usefulness. Although users might find ways to bypass safety measures, we believe that establishing safe behavior as the default is essential for user-facing applications. We have added these points to the Limitations and Future Work Section in our revised manuscript.\"}", "{\"comment\": \"**W2: I'm confused about the empirical gains. It seems that in Table 1, the random selection GPT-4o baseline performs on par with the rejection sampling, indicating that the filtering step is not that crucial.**\\n\\nThank you for this observation. We would like to first note that we have expanded our experiments to cover multiple model families and sizes to enhance the robustness and generalizability of our findings. Our revised manuscript includes results for Llama-3.1-8B, Llama-3.2-3B, and Phi-3-7B. 
We present the results for Llama-3.1-8B in the main text and share the results of Phi-3-7B and Llama-3.2-3B in Appendix D2 and D3, respectively, since our findings remain consistent. Therefore, Table 1 shows the results for Llama-3.1-8B in our revised manuscript, and Phi-3-7B results are in Table 6.\\n\\nWhile random selection and rejection sampling may appear similar at first glance, our results reveal that rejection sampling effectively identifies safer operating points while preserving model usefulness, avoiding unnecessary trade-offs between safety and usefulness. For instance, in OR-Bench, when using the ArmoRM helpfulness criterion:\\n\\n1.\\tPhi-3-7B's F1-score improves by 2.75%, driven by enhancements in both Not-Unsafe Rate and Not-Overrefusal Rate (Table 6)\\n\\n2.\\tLlama-3.1-8B's F1-score increases by 1.02% while its safety increases by 5.65% (Table 1)\\n\\n3.\\tLlama-3.2-3B shows a 0.51% improvement in F1-score while its safety increases by 1.07% (Table 7)\\n\\nThese consistent improvements across different models support our findings. We have elaborated on these points in Section 4.1 in our revised manuscript and added a detailed discussion in Appendix E.\\n\\n**W2: Moreover, in Figures 3 and 4, training on GPT-3.5 seems to be extremely safe (though it does not solve over-refusal).**\\n\\nThank you for this observation. The extremely safe but overrefusing behavior of GPT-3.5 highlights the key weakness of older teacher models. Training on GPT-3.5 gives high safety at the cost of high overrefusal. However, training on GPT-4o achieves the same high safety level with much less overrefusal. Therefore, we can conclude that using better teacher models effectively reduces the development of overrefusal during safety finetuning.
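As an aside for readers following the numbers above: the F1 score between safety and usefulness discussed in these rebuttals can be sketched as the harmonic mean of the Not-Unsafe Rate (safety) and the Not-Overrefusal Rate (usefulness). This is an illustrative assumption about the aggregation; the paper's exact computation may differ.

```python
# Sketch (assumption): treat the safety-usefulness F1 as the harmonic mean of
# the Not-Unsafe Rate and the Not-Overrefusal Rate, both expressed in [0, 1].
def safety_usefulness_f1(not_unsafe_rate: float, not_overrefusal_rate: float) -> float:
    total = not_unsafe_rate + not_overrefusal_rate
    if total == 0:
        return 0.0
    return 2 * not_unsafe_rate * not_overrefusal_rate / total

# A perfectly safe but heavily overrefusing model scores low overall,
# while a slightly less safe but far more useful model scores higher.
print(round(safety_usefulness_f1(1.00, 0.45), 3))  # 0.621
print(round(safety_usefulness_f1(0.99, 0.82), 3))  # 0.897
```

The harmonic mean penalizes imbalance, which is why high safety alone cannot compensate for severe overrefusal in such a metric.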
\\n\\nWe have revised the discussion of these findings in Section 4.1 to communicate these points more clearly.\\n\\n**W3: Ensuring that POROver improves upon simple baselines.**\\n\\nWe appreciate the concern about baseline comparisons. As we have elaborated in our reply to Weakness 1, we are not aware of any prior work specifically targeting over-refusal reduction in already-safe LLMs \\u2013 we present this as a novel problem space distinct from general safety improvements. Our baselines are before-POROver versions of each tested model, which allows us to directly measure the impact of our method.\\nOur experiments now include comprehensive evaluations across multiple models (Llama-3.1-8B, Llama-3.2-3B, and Phi-3-7B), demonstrating consistent improvements. Specifically, in OR-Bench: \\n\\n1.\\tLlama-3.1-8B improves its usefulness from 57.6% to 82.1%.\\n2.\\tPhi-3-7B improves its usefulness from 54.8% to 85%.\\n3.\\tLlama-3.2-3B improves its usefulness from 54.8% to 83.4%.\\n\\nAll models achieve these improvements with less than 1% drop in their safety. We believe these results effectively demonstrate POROver's value compared to the baseline performance of unmodified models.\\n\\n**W3: Focusing on key empirical takeaways and organizing the presentation of the results.**\\n\\nThank you for raising this point. We have reorganized our Results section to highlight the key findings more clearly. We have also improved our discussions throughout the manuscript to better explain our main insights and empirical results.\"}", "{\"comment\": \"We thank the reviewer for their review and comments.\\n\\nWe will answer the points raised individually.\\n\\n**W1, Q1: Exploring multiple model families and sizes.**\\n\\nWe agree that experimenting with different model families and sizes is crucial to achieve robust and convincing results. In response to this feedback, we have expanded our experiments to cover multiple model families and sizes.
Our revised manuscript includes results for Llama-3.1-8B, Llama-3.2-3B, and Phi-3-7B. We present the results for Llama-3.1-8B in the main text and share the results of Phi-3-7B and Llama-3.2-3B in Appendix D2 and D3, respectively. \\n\\nWhile we observed subtle variations in the exact Not-Unsafe and Not-Overrefusal Rates across models during instruction finetuning, the comparative trends between older and newer teachers remained consistent. In addition, POROver effectively reduced overrefusal while maintaining safety across all tested models. Therefore, our conclusions remain consistent across all models we tested.\\n\\nWe were not able to conduct experiments on models larger than 8B parameters due to computational resource constraints. We leave the exploration of scaling behavior with larger models as future work. In our revised manuscript, we have included this point in the Limitations and Future Work section.\\n\\n**W2: Adding more fine-grained experiments on the number of Added Safety Data (ASD) would make the claim that the proposed method is effective without undermining the general abilities of the tested model more convincing.**\\n\\nWe agree that a fine-grained experiment would make the claim more convincing. We have expanded our initial ASD grid of {0, 2000, 20000} to {0, 2000, **5000**, **10000**, **15000**, 20000} for Llama-3.1-8B and evaluated its general capabilities in our revised manuscript. As shown in Figure 6, the general capabilities of the models remained consistent. \\nAdditionally, [1] has previously conducted a similar fine-grained ASD analysis and discussed that the general capabilities don\\u2019t get affected by ASD up to a certain ASD level. Our results show that we were within that acceptable range in our experiments. We have added a citation to [1] to the related discussion in Section 4.3 of our revised manuscript.\\n\\n[1] Bianchi, Federico, et al.
\\u201cSafety-Tuned LLaMAs: Lessons from Improving the Safety of Large Language Models That Follow Instructions.\\u201d OpenReview, 2024, openreview.net/forum?id=gT5hALch9z.\\n\\n**Q2: How sensitive is the proposed method in the choice of the hyperparameter? Were the results vastly different across your grid search?**\", \"the_results_were_not_vastly_different_except_the_two_edge_values\": \"When \\u03c4 = 0 (meaning no toxic prompts in preference training set), the model's safety performance dropped significantly for small gains of usefulness. We hypothesize that this occurred because the primary training signal becomes unconditional compliance with all prompts without toxic examples in the training set. When \\u03c4 = 0.5, the model's usefulness stayed too low while high safety was maintained throughout the training. We have added this discussion to Section 4.2 in our revised manuscript.\\n\\nThank you very much for your time and review! We would greatly appreciate any additional questions and are happy to provide further clarification.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewer for their review and comments.\\n\\nWe will answer the points raised individually.\\n\\n**W1: I believe the algorithms in this work have limited novelty. Rejection sampling and preference optimization are some of the most used tools for current fine-tuning and safety alignment, so the paper needs to provide novel analysis instead.**\\n\\nThank you for pointing this out. While we acknowledge that rejection sampling and preference optimization are commonly used in other domains, our goal is investigating their use to obtain novel insights about safety and usefulness. To the best of our knowledge, no other work has presented a quantitative and systematic analysis of safety and usefulness when comparing older and newer, more superior teacher models during instruction fine-tuning with general-purpose and toxic prompts. 
Additionally, we are not aware of any work in the literature that specifically targets over-refusal reduction while maintaining safety. We have revised the Introduction section in our manuscript and added the following motivations and clarifications about our novelties:\\n\\n1.\\tWhile using advanced teacher models (e.g., GPT-4o) for instruction fine-tuning with general-purpose prompts is known to enhance a student's general capabilities, we systematically analyze its previously unexplored impact on safety and usefulness. We find that it significantly enhances the model's safety and usefulness balance. Our models demonstrate significantly increased safety with only a modest reduction in usefulness.\\n2.\\tThe few available open-source instruction fine-tuning datasets containing toxic prompts present a significant challenge: they lead to high overrefusal in trained models. Models trained on these datasets tend to develop significant overrefusal in their attempt to achieve the highest safety levels [1]. Notably, these datasets were generated using older models like GPT-3.5 as teachers. In our work, we investigate the impact of using more recent, advanced models to generate training data for safety finetuning. Our results reveal that models trained with completions generated by superior teacher models develop significantly less overrefusal. However, obtaining high safety levels with superior teacher models requires larger training datasets, revealing a previously undocumented trade-off in safety assurance and data efficiency.\\n3.\\tThere are numerous recent LLMs that are highly safe but exhibit high overrefusal, including Claude-3, Gemini-1.5, Llama-2, and Llama-3 [2][3]. While this behavior may stem from conservative safety filtering during training, the exact mechanisms remain unclear due to the proprietary nature of training datasets and procedures. 
In scenarios where a model is highly safe but exhibits high overrefusal, the goal becomes reducing over-refusal while maintaining the high safety level. We introduce POROver to specifically target this scenario. To the best of our knowledge, our paper is the first to address the goal of reducing overrefusal while maintaining high safety, and to evaluate the use of preference optimization methods specifically targeted to this goal.\\n\\nWe believe that these contributions, all supported by quantitative evidence through standard open-source benchmarks, provide a comprehensive framework for understanding and optimizing safety and usefulness in language models, establishing novel insights. Given the critical importance of safety, we have also taken steps to address the lack of high-quality open-source finetuning datasets by releasing all the data we generated with GPT-4o.\\n\\n\\n[1] Bianchi, Federico, et al. \\u201cSafety-Tuned LLaMAs: Lessons from Improving the Safety of Large Language Models That Follow Instructions.\\u201d OpenReview, 2024, openreview.net/forum?id=gT5hALch9z.\\n\\n[2] Cui, Justin, et al. \\u201cOR-Bench: An Over-Refusal Benchmark for Large Language Models.\\u201d ArXiv.org, 2024, arxiv.org/abs/2405.20947.\\n\\n[3] R\\u00f6ttger, Paul, et al. \\u201cXSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models.\\u201d ArXiv.org, 2023, arxiv.org/abs/2308.01263.\"}", "{\"metareview\": \"This paper aims to understand the benefits of overgeneration and forms of finetuning/rejection sampling to improve usefulness/safety tradeoffs in LLMs. While reviewers agreed that this is an important direction of study, many felt that it was somewhat incremental compared to prior work.
Initial experiments focused only on the Phi models, and while later experiments include Llama models, I am still inclined to hold the limited scope against the authors.\\n\\nSome reviewers brought up concerns that the authors looked at models with too few parameters. While indeed this may affect validity of the finding, enforcing stringent guidelines on model size for submissions unfairly disadvantages researchers with smaller compute budgets. Nevertheless, given that the contribution of the paper is on the evaluation side more than novelty, I believe that the authors do need to be more extensive and detailed in their experimentation. From my own reading, I also found it difficult to discern their precise methodology from reading the submission, and there was no pseudocode to be found. Ultimately, with more novelty, more breadth, and improved presentation, this work could be a nice contribution. But where it stands, I believe the work has room to improve before meriting acceptance.\", \"additional_comments_on_reviewer_discussion\": \"While reviewer discussion was less thorough than I would have appreciated, the most vocal reviewer advocated for rejection. Their concerns included small model sizes and lack of culture/language diversity (I think this is acceptable if we wish to allow for compute-restricted academic research budgets), but the concerns about poor method documentation and lack of robustness to adversarial attacks were compelling.\"}", "{\"summary\": \"The paper titled \\\"POROVER: IMPROVING SAFETY AND REDUCING OVERREFUSAL IN LARGE LANGUAGE MODELS WITH OVERGENERATION AND PREFERENCE OPTIMIZATION\\\" presents a comprehensive study on enhancing the safety and reducing overrefusal in large language models (LLMs). The authors examine the impact of overgenerating training data using advanced teacher models on the safety and usefulness balance of instruction-following language models. 
They introduce POROver, a strategy that employs preference optimization methods to reduce overrefusal by leveraging completions from superior teacher models. The study demonstrates significant improvements in the F1 score between safety and usefulness, and a substantial reduction in overrefusal rates.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a novel approach to reducing over refusal in LLMs through overgeneration and preference optimization, which is a creative solution to a common problem in the field. The paper is well-written and the results are clearly presented, making it easy to follow the authors' reasoning and findings. The work addresses a critical issue in the deployment of LLMs, improving their safety without compromising their usefulness, which has significant implications for real-world applications.\", \"weaknesses\": [\"The paper primarily focuses on a single model size and family (Phi-3), which limits the generalizability of the findings. While the authors acknowledge this limitation, the lack of experimentation with different model scales makes it difficult to understand how these methods would perform across the spectrum of model sizes. This is particularly important given that safety and overrefusal behaviors often vary significantly with model scale. Including experiments with both smaller (3-4B) and larger (70B+) models would provide stronger evidence for the method's broad applicability. The paper's evaluation methodology relies heavily on automatic metrics and a limited set of benchmarks. While the chosen benchmarks (e.g., OR-Bench, XSTest) are relevant, they may not capture the full spectrum of real-world scenarios where safety and overrefusal matter. 
Including evaluations on more diverse datasets, particularly those featuring different languages, cultures, and domain-specific contexts, would strengthen the paper's conclusions about the method's effectiveness.\", \"The computational analysis of the proposed methods is notably absent from the paper. The overgeneration approach with GPT-4 as a teacher model likely incurs significant computational costs, yet there's no discussion of the training efficiency or resource requirements. This omission makes it difficult for practitioners to assess the method's feasibility in production environments. A detailed analysis of computational overhead compared to standard fine-tuning approaches would be valuable.\", \"The paper lacks a thorough comparison with existing safety and overrefusal reduction methods. While baseline comparisons are provided, the authors don't fully contextualize their results within the broader landscape of recent work on LLM safety alignment. A more comprehensive comparison with methods like constitutional AI, RLAIF, and other preference optimization approaches would better demonstrate the advancement over state-of-the-art.\", \"The robustness of the proposed method requires more thorough investigation. The paper doesn't examine how the method performs under adversarial conditions or when faced with edge cases. Additionally, there's no analysis of the consistency of results across multiple training runs or different random seeds. This makes it difficult to assess the reliability and stability of the approach in practice. The ethical implications of reducing overrefusal deserve deeper examination. While the paper successfully demonstrates technical improvements in reducing overrefusal, it doesn't adequately address the broader implications of making models more compliant.\"], \"questions\": \"1. Can you provide more insight into why advanced teacher models require more safety examples? 
Is this related to the complexity of their responses or other factors?\\n\\n2. How do you expect the observed trends to scale with different model sizes? Would smaller or larger models show similar patterns?\\n\\n3. Could you elaborate on how different rejection sampling criteria were selected? Were other criteria considered?\\n\\n How sensitive are the results to the specific thresholds used in rejection sampling?\\n\\n4. Could the authors expand on the ethical implications of their work, particularly regarding the balance between user freedom and model safety?\\n\\n5. How do the results of POROver compare to other existing methods for improving LLM safety and reducing overrefusal? Are there any specific scenarios where POROver outperforms or falls short of other approaches?\\n\\n6. Have you explored automated methods for tuning the containment threshold \\u03c4?\\n\\n Were other preference optimization methods considered besides DPO?\\n\\n How does the slight safety compromise in OR-Bench Toxic relate to the containment threshold?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5ECUAQJUuq
AdvLoRA: Adversarial Low-Rank Adaptation of Vision-Language Models
[ "Yuheng Ji", "Yue Liu", "Zhicheng Zhang", "Zhao Zhang", "Yuting Zhao", "Gang Zhou", "Xingwei Zhang", "Xinwang Liu", "Xiaolong Zheng" ]
Vision-Language Models (VLMs) are a significant technique for Artificial General Intelligence (AGI). With the fast growth of AGI, the security problem becomes one of the most important challenges for VLMs. In this paper, through extensive experiments, we demonstrate the vulnerability of conventional adaptation methods for VLMs, which may bring significant security risks. In addition, as the size of VLMs increases, performing conventional adversarial adaptation techniques on VLMs results in high computational costs. To solve these problems, we propose a parameter-efficient \underline{Adv}ersarial adaptation method named \underline{AdvLoRA} by \underline{Lo}w-\underline{R}ank \underline{A}daptation. First, we investigate and reveal the intrinsic low-rank property during the adversarial adaptation for VLMs. Different from LoRA, we improve the efficiency and robustness of adversarial adaptation by designing a novel reparameterizing method based on parameter clustering and parameter alignment. In addition, an adaptive parameter update strategy is proposed to further improve the robustness. With these settings, our proposed AdvLoRA alleviates the problems of model security and high resource waste. Extensive experiments demonstrate the effectiveness and efficiency of AdvLoRA.
[ "Vision-Language Models", "Adversarial Training", "Parameter-efficient Adaptation" ]
https://openreview.net/pdf?id=5ECUAQJUuq
https://openreview.net/forum?id=5ECUAQJUuq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "de6ZqpAAnH", "UvcfbnXHoz", "JDuOxc3izf", "EO29gG8BQD", "Bd09Mtadjf" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1729324813009, 1731489491863, 1729838339960, 1731143217299, 1730521631190 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1750/Reviewer_zUDk" ], [ "ICLR.cc/2025/Conference/Submission1750/Authors" ], [ "ICLR.cc/2025/Conference/Submission1750/Reviewer_NGsG" ], [ "ICLR.cc/2025/Conference/Submission1750/Reviewer_Peks" ], [ "ICLR.cc/2025/Conference/Submission1750/Reviewer_72pu" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a parameter-efficient method to enhance the adversarial robustness of VLMs. Traditional adaptation methods like full fine-tuning and LoRA are vulnerable to adversarial attacks, leading to performance drops. AdvLoRA improves robustness by utilizing low-rank adaptation, parameter clustering, and adaptive update strategies, reducing computational costs. Experiments show that AdvLoRA outperforms other methods, especially in adversarial scenarios.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe writing is clear. 
The formulas are correct.\\n2.\\tThe experiments are abundant and multi-dimensional.\\n3.\\tThe research topic is important for VLMs.\", \"weaknesses\": \"1.\\tWhile the method is effective, there is no analysis explaining the necessity of reparameterization.\\n2.\\tThe rationale behind using clustering to establish a connection with the parameter in W is insufficiently analyzed.\\n3.\\tThe justification for employing an adaptive update parameter is also lacking.\", \"questions\": \"Please see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a LoRA-based adversarial training method for vision-language models. Unlike directly using LoRA, this method improves the efficiency and robustness of adversarial adaptation by designing a novel reparameterization method based on parameter clustering and parameter alignment. Through extensive experiments, the article demonstrates the effectiveness of AdvLoRA.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper provides a detailed introduction to the method, making it easy to understand.\\n\\nIt also conducts numerous experiments to demonstrate the effectiveness of the approach.\", \"weaknesses\": \"1. In terms of writing, the entire paper does not seem to use the correct citation format; the ICLR template should utilize \\\\citep. Therefore, a thorough review and verification of the paper are necessary to meet writing standards.\\n2. In lines 177-181, L has not used cross-referencing \\\\ref.\\n3. It is a well-known fact that using adversarial samples for adversarial training can degrade model performance, and the introduction of Table 1 is not very clear regarding which model was trained.\\n4. 
If I am not mistaken, AdvLora seems to only improve the initialization of LoRA, which makes its contribution appear relatively small.\\n5. It is necessary to compare this method with other adversarial training approaches, such as RobustCLIP[1].\\n\\n[1] Schlarmann, Christian, et al. \\\"Robust clip: Unsupervised adversarial fine-tuning of vision embeddings for robust large vision-language models.\\\" arxiv preprint arxiv:2402.12336 (2024).\", \"questions\": \"See Weaknesses.\\nI would like to see the authors provide further clarification on the contributions of their work to confirm whether my understanding is correct.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the adversarial robustness of VLMs during PEFT. The authors improve the efficiency and robustness of adversarial adaptation by designing a reparametrizing method based on parameter clustering and parameter alignment.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper investigated an important problem and proved that the proposed ADVLORA can improve the adversarial robustness of BLIP-like VLMs in a parameter-efficient manner.\", \"weaknesses\": [\"The novelty is very limited since the ADVLORA is proposed by combining adversarial training and LORA. Also, the proposed parameter clustering is not well-motivated.\", \"The pipeline of ADVLORA is unclear. I hope that the authors could further clarify the purpose of Eq.8-12. Are they used for initialization or updated in each iteration?\", \"How to choose the parameter $\\\\alpha$, which is newly introduced compared to the original LORA.\", \"The authors only investigate BLIP, whereas, there are many other VLMs, like CLIP.\", \"The citation format should be revised. And there are many typos, such as \\u201cEq. 
equation\\u201d in Algorithm1.\"], \"questions\": [\"What is the purpose of Eq.8-12?\", \"How to choose the parameter $\\\\alpha$?\", \"Does the ADVLORA work on other types of VLM?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a parameter-efficient adversarial adaptation method called AdvLoRA, based on Low-Rank Adaptation. Initially, they investigate and reveal the intrinsic low-rank properties present in adversarial adaptation for vision-language models (VLMs). Unlike LoRA, AdvLoRA enhances the efficiency and robustness of adversarial adaptation through a novel reparameterization method that leverages parameter clustering and alignment. Additionally, an adaptive parameter update strategy is introduced to further enhance robustness. With these innovations, AdvLoRA addresses issues related to model security and excessive resource consumption. Extensive experiments demonstrate the effectiveness and efficiency of AdvLoRA.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1.\\tThis paper presents AdvLoRA, a novel parameter-efficient adversarial adaptation method that improves the adversarial robustness of vision-language models (VLMs) through low-rank adaptation, representing an interesting avenue for research.\\n2.\\tThe paper presents comparative results across some mainstream datasets.\\n3.\\tThe method proposed in this paper is practical and applicable.\", \"weaknesses\": \"1.\\tThe comparison between the proposed method and existing adversarial robustness techniques is insufficient, particularly regarding performance across different attack types.\\n2.\\tIn the absence of an analysis of the proposed method's efficiency, clustering may be theoretically time-consuming.\\n3.\\tAblation experiments should be a key component of the study, as it is crucial to evaluate the effectiveness of each module 
of the proposed method. The current content does not adequately demonstrate the method's effectiveness and lacks a detailed comparative analysis.\\n4.\\tThe reparameterization method lacks theoretical support.\", \"questions\": \"1. Why does AB need to be aligned with W_0 ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5E6VOD7W0z
On Erroneous Agreements of CLIP Image Embeddings
[ "Siting Li", "Pang Wei Koh", "Simon Shaolei Du" ]
Recent research suggests that the failure of Vision-Language Models (VLMs) in visual reasoning could be attributed to the CLIP image encoder ambiguously encoding distinct images into embeddings with high cosine similarity, namely *erroneous agreements*. In this paper, we show that they are not the sole issue, as multimodal large language models (MLLMs) may extract distinct information even from image embeddings with high cosine similarities. On Subset A of the What'sUp benchmark, where the Left/Right image pairs are embedded by CLIP with average cosine similarity greater than 0.99, CLIP's performance is near random guess. In contrast, LLaVA-1.5-7B, which uses the same image encoder as CLIP, achieves nearly 100\% accuracy. This discrepancy is also observed between LLaVA-1.5-7B and CLIP-like models on similar benchmarks. To investigate this performance gap, we conduct controlled experiments to test the effect of varying evaluation methods, training data, and language processing choices. We find that the CLIP image embeddings contain more extractable information than previously suggested, but it is likely obscured by the inadequate vision-language alignment of the CLIP's paradigm. Motivated by this observation, we reconsider the LLaVA-1.5 model on the MMVP benchmark, for which prior work showed that it could not distinguish image pairs with high cosine similarity. We observe a performance gain brought about by an alternative decoding algorithm, which attends more to visual input. Further, we show that the accuracy significantly increases if the model can take both images as input to emphasize their nuanced differences. Both findings indicate that LLaVA-1.5 did not utilize extracted visual information sufficiently. In conclusion, our findings suggest that while improving image encoders could benefit VLMs, there is room to enhance the models with a fixed image encoder through better strategies for extracting and utilizing visual information.
[ "Multimodal Learning", "CLIP", "LLaVA", "cosine similarity", "erroneous agreement" ]
Reject
https://openreview.net/pdf?id=5E6VOD7W0z
https://openreview.net/forum?id=5E6VOD7W0z
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yv2EVpuki1", "kscBD3cVpV", "iEPRplWzaK", "fbunZOqtBg", "da7xG9zc8Y", "d10Ovrirza", "ZWzgStvaA4", "ZQuVtzpLZ3", "Xf37mFV3fS", "UiktcQXmrV", "RhILtVQA5w", "R72E2KZJnw", "QiQDQ8W2Lv", "OF3SAwLSa8", "MeYHbS4Mui", "IE0H4dZFTd", "HCt5g77Kkt", "GeZrRoWFl5", "FQZk9oFLjT", "CqcS9jTDox", "BxvkUCSTw9", "AZQPB9GxjV", "AZBpNyFJHG", "AGyBr0k8UY", "2RsZUbsk0Y", "1pxcrw0VJo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review" ], "note_created": [ 1732666552962, 1732642476696, 1732309402766, 1732759037317, 1732309193278, 1732779515194, 1732802874881, 1732388259536, 1732618637692, 1732309007038, 1730143769289, 1737523616693, 1732779085284, 1730575492656, 1732308224767, 1732665664044, 1732171620221, 1732666011644, 1732665505785, 1732805847802, 1732864423025, 1732310017438, 1730321808691, 1733129875520, 1730535525771, 1734755725224 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_7vBu" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_DMVn" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_DMVn" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_fpLb" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_7vBu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" 
], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_fpLb" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_2b8n" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_fpLb" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_DMVn" ], [ "ICLR.cc/2025/Conference/Submission4052/Authors" ], [ "ICLR.cc/2025/Conference/Submission4052/Reviewer_2b8n" ], [ "ICLR.cc/2025/Conference/Submission4052/Area_Chair_kLgE" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for taking the time to review our response and to read other reviews. We appreciate your feedback and are glad you found the work interesting. If there are any specific areas where we can provide further clarification or improvements, we\\u2019d be happy to address them. If the revision and our reply have addressed your concerns, we kindly ask you to reconsider your score.\\n\\nThank you again for offering suggestions about our experiments and contributing to the discussion around our work.\"}", "{\"comment\": \"I thank the authors for their response. I would like to stay with my rating.\"}", "{\"title\": \"Our Response to Your Concerns\", \"comment\": \"Thank you for your detailed comments on our paper! We hope that our response below will help you better understand our paper.\\n\\n1. **Your concerns about not showing that a two-layer MLP is important in distinguishing two erroneously agreeing images.**\\n\\n We **did not suggest that the two-layer MLP is the sole key factor** in the performance gap. 
The argument in our paper is that the visual information extraction strategies of VLMs, by which we mean techniques applied on top of the image encoder, are important. In LLaVA-1.5, this refers to both the two-layer MLP connector and the language model used for answer generation. \\n \\n Although separating the functions of the connector and the language model in MLLMs is an interesting question for future exploration, this is not the focus of our paper. To avoid future confusion, we clarified the meaning of visual information extraction in the introduction of our revised submission (L74-L76).\\n2. **Your concerns regarding our use of Spearman's rank correlation and why \\u03c1 = \\u22121 show that the embeddings are \\\"fully opposed.\\\"**\\n\\n In our paper, Spearman's rank correlation coefficient serves as a toy example to show that cosine similarity does not depict all aspects of vector pairs, and we could still get different information from vectors with high cosine similarity. \\n \\n For the vector pairs in our example ($[10,11,12]^\\\\top$ and $[12,11,10]^\\\\top$), their Spearman's rank correlation coefficient is -1, indicating that they have a perfectly inverse order. Hence, their order information is fully opposed in this sense. \\n \\n For your example of interpreting these two vectors as the embeddings of two dog images, we did not intend to show that the dogs \\\"look opposite in the general sense.\\\" Instead, we show that in the specific sense (order information), the two embeddings are opposite, and we can extract this opposite information through Spearman's rank correlation coefficient. 
If they are indices of the dog's features you mentioned, the two embeddings could yield opposite answers to the question, \\\"Are the indices of the dog's features in an ascending order?\\\" \\n \\n We also rephrased the relevant description (L243-L244) for the toy example in our submission to better express our intention.\\n\\nFeel free to ask further questions, and we are willing to address them as well!\"}", "{\"title\": \"Follow-up\", \"comment\": \"In response to Reviewer `fpLb`'s suggestion on providing a more diverse and consistent comparison (not limited to OpenAI-CLIP-L-336 vs. LLaVA-1.5), we further updated the Appendix B.5 with evaluation for InstructBLIP-Vicuna-7B and Otter-Image-MPT7B on What'sUp, COCO-spatial, and GQA-spatial benchmark.\"}", "{\"title\": \"Our Response to Your Concerns\", \"comment\": \"Thank you so much for your detailed feedback! Below, we offer responses to assure your concerns:\\n1. **Your feedback that the general storyline of our paper is confusing.**\\n\\n Thank you for your valuable suggestion in our writing!\", \"our_storyline_is_as_follows\": \"Previous research attributes LLaVA-1.5's low performance on the MMVP benchmark to deficiencies in the CLIP vision encoder, specifically erroneous agreements. We first show that LLaVA-1.5 is able to perform well on image pairs with erroneous agreements as shown on the What'sUp benchmark, whereas CLIP cannot. Then we conduct an ablation study to identify the key difference between CLIP and LLaVA-1.5 that causes their performance gap. Finally, we revisit the MMVP benchmark and provide insights into the true cause for LLaVA-1.5's shortcomings.\\n \\n In our paper, What'sUp is considered to be easier for LLaVA-1.5 than MMVP, and the latter remains unsolved. Since the What'sUp and MMVP benchmarks are less known than the benchmarks you mentioned, it is useful to introduce and explain more on model performance and what people expected on them previously. 
In our submission, we introduced this difference in the introduction (L86) and in task setup (L223). Now we added more context to Figure 1 (both the image and the caption), highlighted this difference more in the introduction (L52-L54, L67), and revised the relevant description (originally in L186, now in L193-L195) to improve reader experience.\\n\\n2. **Your concerns that our scope, task, and dataset choices are not enough to derive general conclusions on the visual reasoning abilities of LLaVA and MLLMs.**\\n\\n Our goal is not to draw conclusions about the **general visual reasoning** abilities of LLaVA or other MLLMs or to prove their supremacy over CLIP generally. We focus on the specific type of task that requires the VLMs to distinguish image pairs with highly similar CLIP embeddings (erroneous agreements), which were claimed to cause VLMs' shortcomings in the paper \\\"Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs.\\\" This motivates our use of the What'sUp and MMVP benchmarks for comparisons and ablation studies, as they consist of such image pairs. On these benchmarks, we show contrary evidence that LLaVA-1.5 can extract distinct information even from these similar embeddings. \\n\\n The other benchmarks you mentioned, including VQA, GQA, OK-VQA, VCR, and those used by LLaVA (VisWiz, SQA, TextVQA, POPE, MME, MMB, Chinese MMB, SEED, LLaVA-Bench, MM-Vet), are valuable for evaluating general visual reasoning capabilities. However, since they do not focus on distinguishing similar paired images, they fall outside the scope of our study.\\n \\n Although our scope is limited to this specific task type, our findings highlight important insights: CLIP and LLaVA-1.5 paradigms employ inherently different mechanisms for extracting visual information, and there is still room to enhance VLMs with a fixed, pretrained image encoder. 
These findings contribute to the broader goal of designing more capable VLMs for general visual reasoning tasks.\\n \\n We further clarified our scope in the related work section (L104-L107) and add more captions to Figure 2 to avoid confusion.\\n \\nIf you have further concerns or questions, please let us know and we are willing to reply as well!\"}", "{\"title\": \"Follow-up\", \"comment\": \"1. **Your concern that our analysis is limited, as it only includes only a single comparison, and your suggestion on additional experiments.** (Cont'd)\\n\\n *Table 4: Results of InstructBLIP and Otter on COCO-spatial and GQA-spatial benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 2 in our submission.*\\n \\n | | COCO-spatial, one-obj. | COCO-spatial, two-obj. | GQA-spatial, one-obj. | GQA-spatial, two-obj. |\\n | -------- |:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|\\n | CLIP-ViT-L/14-336px | 48.9 | 51.1 | 46.6 | 49.1 |\\n | LLaVA-1.5-7B | **96.0** | **82.3** | **96.0** | **90.7** |\\n | InstructBLIP-Vicuna-7B | 55.0 | 51.4 | 47.8 | 50.2 |\\n | Otter-Image-MPT7B | 51.9 | 50.0 | 54.1 | 51.9 |\\n | Random Chance | 50.0 | 50.0 | 50.0 | 50.0 |\\n \\n Compared with LLaVA-1.5, with different architectures and training data, Otter and InstructBLIP struggle on this benchmark (close to random chance except On/Under for InstructBLIP). Due to their poor performance, we did not include the evaluation of their visual components here. Hence, we can see that MLLMs do not guarantee more effective extraction from frozen image encoder. Good design of MLLM architecture and training data synergize to provide strong visual information extraction ability. \\n \\n2. 
**Your concern that our claim on LLaVA-1.5 is not fully substantiated.**\\n\\n Our finding that LLaVA-1.5 performs well on image pairs with CLIP embeddings of high cosine similarity emphasizes the **feasibility** of extracting distinct information from such pairs using MLLMs, but not any guarantee of MLLMs to perform well on all similar image pairs. There are many factors beyond image encoder at play in MLLM's visual reasoning, such as language model's reasoning ability.\\n \\n This feasibility reveals that highly similar CLIP embeddings are not the main culprit of the suboptimality of LLaVA-1.5 on MMVP benchmark, and there should be other causes. As an initial exploration of these causes, our experiments in discussion section show that visual information is not fully utilized in answer generation, implying the possibility of improving model performance with image encoder fixed. \\n\\nIf you have further questions or concerns, we would be happy to address them. Thank you again for your feedback!\"}", "{\"title\": \"response\", \"comment\": \"I thank the authors for the revised version. However, i would like to maintain my score, as I find the claim that the authors make not well-supported by experimental evidence. The benchmarks the authors used are not enough to support this claim. Specifically, I asked for experiments on other datasets, including the ones used in the original paper [R1] that the authors counteract, but the authors did not address this during the discussion period. Although I understand that WhatsUp and MMVP are the ones related to erroneous agreements, I still don't think there can be generalization insights by just investigating these datasets. Moreover, only LLava is considered, and we cannot get generalizable insights from one model. The results could be of the way Llava is trained or because of its architecture. More MLLMs are needed to verify this claim. Given these reasons, i maintain my score.\\n\\n\\n[R1] Eyes Wide Shut? 
Exploring the Visual Shortcomings of Multimodal LLMs\"}", "{\"title\": \"response to authors\", \"comment\": \"I thank the authors for their response. I wish to stay with my score.\"}", "{\"comment\": \"Thank you for the detailed response. After thoroughly reviewing your explanation and the feedback from other reviewers, I have identified additional concerns that warrant clarification.\\n\\nThe performance comparison between LLaVA-1.5 and its vision encoder, OpenAI-CLIP-L/336, on the MMVP-VLM benchmark is interesting. However, the analysis appears limited, as it includes only a single comparison. It would be better if the authors provided a more diverse and consistent comparison, not limited to OpenAI-CLIP-L-336 vs. LLaVA-1.5, but also EVA-01-CLIP-g vs. InstructBLIP (Vicuna7b, Vicuna13b, Flan T5xl). In this way, the authors can analyze the influence of the different Visual Encoders and the LLM architectures on the performance boost.\\n\\nFurthermore, while it is interesting to observe improvements in the instructional model, the gains may result from LLaVA training. It adapts the language model (LLM) to leverage the frozen vision encoder better. This could enhance the LLM\\u2019s ability to extract visual information from sequence embeddings compared to CLIP models relying on the CLS token to align with smaller and limited language models. However, despite these improvements, the overall performance remains suboptimal. Consequently, I do not find the claim that \\u201cLLaVA-1.5 can distinguish images with CLIP embeddings of high cosine similarity, indicating that erroneous agreements are not the bottleneck of their visual reasoning performance on image pairs\\u201d to be fully substantiated.\\n\\nGiven these considerations, I maintain my initial score.\"}", "{\"title\": \"Follow-up\", \"comment\": \"1. 
**How do these findings generalize to other MLLMs beyond LLaVA-1.5?** (Cont'd)\\n\\n *Table 4: Results of LLaMA-3-V-8B and Phi-3-V-3.8B on COCO-spatial and GQA-spatial benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 2 in our submission.* \\n \\n | | COCO-spatial, one-obj. | COCO-spatial, two-obj. | GQA-spatial, one-obj. | GQA-spatial, two-obj. |\\n | -------- |:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|\\n | CLIP-ViT-L/14-336px | 48.9 | 51.1 | 46.6 | 49.1 |\\n | LLaVA-1.5-7B | 96.0 | 82.3 | 96.0 | 90.7 |\\n | LLaMA-3-V-8B | **97.8** | 83.2 | **99.0** | 90.7 |\\n | Phi-3-V-3.8B | 97.3 | **85.2** | 98.0 | **91.1** |\\n | Random Chance | 50.0 | 50.0 | 50.0 | 50.0 |\\n \\n Table 1~3 show that LLaMA-3-V-8B and Phi-3-V-3.8B can also extract distinct information from highly similar embeddings, though they are weak on some prepositions. These results show that our findings generalize to these two MLLMs with different scales and language models. We included these results in Appendix B.5 of our revised submission. \\n\\n2. **What specific mechanisms allow MLLMs to extract distinct information from seemingly similar embeddings?**\\n\\n In contrast to CLIP's dot product mechanism, MLLMs enable non-linear extraction of visual information from image embeddings and support complex interactions between image and text. For example, they assign different attention weights to image tokens and input text tokens when generating specific output text tokens. This flexibility allows the model to focus on distinct parts of the input, depending on the task at hand.\"}", "{\"summary\": \"Previous works have argued that the poor performance of VLMs on simple visual reasoning tasks is due to their dependence on CLIP encoder. 
They show that CLIP can encode two visually different images with high cosine similarity (called erroneous agreement) and argue that many VLMs fail because they use CLIP as their vision encoder.\\n\\nIn this paper, the authors show that with better extraction and utilization methods, the CLIP encoder can still be used for downstream visual reasoning tasks. They show experiments with LLaVA-1.5 and show that it performs well on benchmarks despite using CLIP as its vision encoder.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Important analysis shown in section 4 (Investigating the performance gap) - this section answers the questions related to training data, language model and evaluation method. This analysis is important to make the claim that visual information extraction is the key factor in determining the performance gap on downstream tasks. And these other factors (eval method, language encoder, training data) are not contributing much to the improved performance.\\n\\n2. Detailed benchmarking of the models on different datasets and good ablation studies.\\n\\n3. They show, using a different decoding method, that even with a fixed pre-trained image encoder, if we try to 'force' VLMs to attend to visual features while decoding (and not just rely on language priors), we can perform well on downstream visual reasoning tasks. Although they used a previously proposed decoding strategy, M3ID (Favero et al., 2024).\", \"weaknesses\": \"1. The authors show that the visual feature extraction technique in LLaVA (a two-layer MLP) is an important step in distinguishing between two erroneous images. But they do not provide a convincing argument on why it is an important step. An analysis on \\\"why just adding a 2-layer MLP on top of pre-trained CLIP makes it so much better?\\\" would have been an amazing addition to the paper.\\n\\n2. 
On Spearman's rank correlation (also asked in the questions): Since CLIP is trained using loss based on cosine similarity, I think using Spearman's rank correlation to show that two embeddings are \\\"fully opposed\\\" is not correct. For example, consider the example given on LN 232-233. Although the ranks of the dims are reversed giving \\u03c1 = \\u22121, their absolute values are pretty close. And if we assume (in an ideal world) them to be separable features, for example the embeddings could be of dog images and the features are 'ear-length' , 'fur color', 'nose-shape', both the embeddings will still show two very similar looking dogs (and not 'fully opposite') even though the embedding might have \\u03c1 = \\u22121.\", \"questions\": \"Would a high negative Spearman's rank correlation show that the embeddings are quite different?\", \"ln_232_236_says\": \"\\\"While SC (fv(v1), fv(v2)) > 0.989, Spearman\\u2019s rank correlation coefficient can tell their sharp difference: \\u03c1 = \\u22121, showing that they are fully opposed in this sense. Therefore, the difference in visual inputs might still be extracted through other means when erroneous agreements occur\\\"\\n\\nHow does \\u03c1 = \\u22121 show that the embeddings are 'fully opposed'? If the authors could show this or cite a paper that shows this, that would be great.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Our Response to Your Further Concerns\", \"comment\": \"Thank you for taking time to read our response and to read other reviews. We reply to your further concerns as follows:\\n\\n1. 
**Your concern that our analysis is limited, as it includes only a single comparison, and your suggestion on additional experiments.**\\n\\n Thank you for your suggestion on providing a more diverse and consistent comparison using additional models. We would like to first clarify that we did not claim that all MLLMs outperform their visual components. For exploring whether the performance boosts generalize to other MLLM-image encoder pairs, in addition to LLaVA-1.5 (as well as LLaMA-3-V-8B and Phi-3-V-3.8B, which we included later in Appendix B.5), we evaluated two other MLLMs with distinct paradigms and training datasets: **InstructBLIP-Vicuna-7B** with EVA-CLIP-ViT-G/14 and **Otter-Image-MPT7B** with CLIP-ViT-L/14. These models were selected because they freeze the image encoder during training, aligning with the objectives of our comparison, unlike many MLLMs that finetune their image encoders. Meanwhile, they utilize the image embeddings in different ways from LLaVA-like models. The results on the What'sUp, COCO-spatial, and GQA-spatial benchmarks are as follows. We also included these results in Appendix B.5 of our revised submission. \\n \\n *Table 1: Results of InstructBLIP and Otter on What'sUp Subset A. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 1 in our submission.*\\n \\n\\n | | Subset A, Left/Right, Indiv. | Subset A, Left/Right, Pairs | Subset A, On/Under, Indiv. | Subset A, On/Under, Pairs |\\n | -------- |:---------------------:|:---------------------:|:-----------------:|:---------------------:|\\n | CLIP-ViT-L/14-336px | 49.0 | 1.9 | 61.7 | 23.3 |\\n | LLaVA-1.5-7B | **99.0** | **98.1** | 80.1 | 60.2 | \\n | InstructBLIP-Vicuna-7B | 50.0 | 1.9 | **93.7** | **87.4** | \\n | Otter-Image-MPT7B | 50.0 | 1.0 | 56.8 | 13.6 |\\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n\\n *Table 2: Results of InstructBLIP and Otter on What'sUp Subset B. 
For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 1 in our submission.*\\n \\n\\n | | Subset B, Left/Right, Indiv. | Subset B, Left/Right, Pairs | Subset B, Front/Behind, Indiv. | Subset B, Front/Behind, Pairs |\\n | -------- |:------------:|:---------:|:------------:|:------------:|\\n | CLIP-ViT-L/14-336px | 54.9 | 10.8 | 51.5 | 7.8 |\\n | LLaVA-1.5-7B | **100** | **100** | **98.5** | **97.1** |\\n | InstructBLIP-Vicuna-7B | 50.0 | 0.0 | 50.0 | 5.9 |\\n | Otter-Image-MPT7B | 50.0 | 0.0 | 51.5 | 11.8 |\\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n \\n *Table 3: Results of InstructBLIP and Otter on What'sUp (four-way classification) benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 2 in our submission.*\\n \\n | | Subset A, Indiv. | Subset A, Pairs | Subset A, Set of 4 | Subset B, Indiv. | Subset B, Pairs | Subset B, Set of 4 |\\n | -------- |:-------:|:------:|:--------:|:----------:|:--------:|:---------:|\\n | CLIP-ViT-L/14-336px | 28.9 | 1.0 | 0.0 | 27.2 | 1.0 | 0.0 |\\n | LLaVA-1.5-7B | **62.1** | **41.3** | **14.6** | **74.0** | **61.8** | **23.5** |\\n | InstructBLIP-Vicuna-7B | 37.6 | 25.7 | 0.0 | 29.9 | 15.2 | 0.0 |\\n | Otter-Image-MPT7B | 24.5 | 2.4 | 0.0 | 24.8 | 3.0 | 0.0 |\\n | Random Chance | 25.0 | 6.3 | 0.4 | 25.0 | 6.3 | 0.4 |\"}", "{\"summary\": \"This paper provides a comprehensive study to analyze the answers supplied by the VLMs. Specifically, It compares the performances of CLIP and LlaVa-1.5-7B in the What\\u2019s Up and MMVP benchmarks. These benchmarks ask questions about a pair of images that contain the same objects and background but in different positions. This paper shows that the LlaVa-1.5-7B can perform better than CLIP in these benchmarks even when LlaVa uses CLIP as a visual encoder, and the average cosine similarity of the CLIP embedding of the image pair is greater than 0.95. 
Moreover, it provides ablation studies to explain this behavior.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper provides some interesting insights to show that the metric commonly used to measure the embedding similarity (Cosine Similarity) does not depict all aspects of vector pairs. Therefore, it suggested a complementary metric, Spearman\\u2019s rank correlation coefficient. However, Table 1 only provides the average Cosine Similarity.\", \"weaknesses\": \"The paper is challenging to follow, primarily due to the absence of a clear statement of its main contributions in the Introduction. Its content closely parallels the CVPR24 paper, \\\"Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs,\\\" raising concerns about the originality of this work. The CVPR24 paper highlights that Visual Language Models (VLMs), often relying on CLIP as the visual encoder, struggle with recognizing fine-grained details, such as object locations. It introduces the MMVP benchmark to evaluate these limitations comprehensively. I encourage the authors to clarify how their contributions provide novel insights beyond this existing research.\", \"questions\": \"I recommend including Spearman's rank correlation coefficient in Table 1 to enhance the analysis. Additionally, a more comprehensive study would be valuable. For example, could the authors provide Spearman's rank correlation coefficient and cosine similarity for the questions with the highest- and lowest-accurate answers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Our Response to Your Concerns\", \"comment\": \"Thank you for your feedback on our paper and your insightful questions! We really appreciate your suggestions, which help improve our work. Here are our responses to your questions:\\n1. 
**How do these findings generalize to other MLLMs beyond LLaVA-1.5?**\\n\\n We first would like to clarify that we did not intend to show that all MLLMs are better than CLIP. Instead, our main focus is to explore whether erroneous agreements are bottlenecks for VLMs (whether we can extract distinct information from highly similar CLIP embeddings or not), and we found that LLaVA-1.5 can capture such nuance, while CLIP cannot. \\n \\n Regarding your question, we agree that generalizing our finding to other MLLMs is an interesting direction and a good supplement to our work. To serve our purpose above, we focus on MLLMs that have CLIP as the image encoder and freeze it during training (otherwise, this variable is not controlled). Hence, we evaluate two other MLLMs, **LLaMA-3-V-8B** and **Phi-3-V-3.8B**, which use frozen CLIP-ViT-L/14-336px as the vision encoder. We use the model weights provided in https://github.com/mbzuai-oryx/LLaVA-pp.\\n\\n *Table 1: Results of LLaMA-3-V-8B and Phi-3-V-3.8B on What'sUp Subset A. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 1 in our submission.*\\n\\n | | Left/Right, Indiv. | Left/Right, Pairs | On/Under, Indiv. | On/Under, Pairs |\\n | -------- |:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|\\n | CLIP-ViT-L/14-336px | 49.0 | 1.9 | 61.7 | 23.3 |\\n | LLaVA-1.5-7B | 99.0 | 98.1 | 80.1 | 60.2 | \\n | LLaMA-3-V-8B | 90.3 | 80.6 | 57.8 | 20.4 | \\n | Phi-3-V-3.8B | **100** | **100** | **85.4** | **70.9** |\\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n\\n *Table 2: Results of LLaMA-3-V-8B and Phi-3-V-3.8B on What'sUp Subset B. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 1 in our submission.*\\n | | Left/Right, Indiv. | Left/Right, Pairs | Front/Behind, Indiv. 
| Front/Behind, Pairs |\\n | -------- |:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|\\n | CLIP-ViT-L/14-336px | 54.9 | 10.8 | 51.5 | 7.8 |\\n | LLaVA-1.5-7B | **100** | **100** | **98.5** | **97.1** |\\n | LLaMA-3-V-8B | 71.1 | 46.1 | 69.1 | 41.2 |\\n | Phi-3-V-3.8B | **100** | **100** | 56.9 | 13.7 |\\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n\\n *Table 3: Results of LLaMA-3-V-8B and Phi-3-V-3.8B on What'sUp (four-way classification) benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px and LLaVA-1.5-7B from the Table 2 in our submission.*\\n | | Subset A, Indiv. | Subset A, Pairs | Subset A, Set of 4 | Subset B, Indiv. | Subset B, Pairs | Subset B, Set of 4 |\\n | -------- |:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|:-----------------------------:|:-------------------------------:|\\n | CLIP-ViT-L/14-336px | 28.9 | 1.0 | 0.0 | 27.2 | 1.0 | 0.0 |\\n | LLaVA-1.5-7B | **62.1** | **41.3** | 14.6 | **74.0** | **61.8** | **23.5** |\\n | LLaMA-3-V-8B | 60.0 | 36.4 | **17.5** | 70.1 | 43.6 | 20.6 |\\n | Phi-3-V-3.8B | 58.0 | 36.4 | 15.5 | 71.8 | 55.4 | 12.8 |\\n | Random Chance | 25.0 | 6.3 | 0.4 | 25.0 | 6.3 | 0.4 |\"}", "{\"comment\": \"We thank you for your reply and for taking the time to consider our responses. If there are any points where the paper could benefit from additional clarification or revisions, we would be happy to address them. If there are no further concerns and you find the revised version satisfactory, we kindly ask if you consider revisiting your score. 
Again, we value your review and hope our revisions align with your expectations.\\n\\nThank you again for your time and constructive engagement.\"}", "{\"title\": \"Our Response to Your Concerns\", \"comment\": \"Thank you for your comments and suggestions.\\n\\nFirst of all, we want to **point out a misunderstanding in your review**.\\nIn the \\\"Strengths\\\" section of your review, you concluded that, \\n- *\\\"Therefore, it suggested a complementary metric, Spearman's rank correlation coefficient.\\\"* \\n\\nHowever, Spearman's rank correlation coefficient is only used as a toy example in Section 3.2 for explaining why we hypothesize that distinct information might still be preserved in embeddings with high cosine similarity and can be extracted. This coefficient does not appear in other contexts in our paper, and is not part of our core contribution.\\n\\nAfter clarifying this, we address your concerns as follows:\\n1. **Your concerns on what our main contributions are and how our contributions provide novel insights beyond the CVPR 2024 paper.**\", \"our_contributions_are_summarized_as_follows\": \"(1) First, we show evidence on the What'sUp benchmark that LLaVA-1.5 can distinguish images with CLIP embeddings of high cosine similarity, indicating that erroneous agreements are not the bottleneck of their visual reasoning performance on image pairs. \\n \\n (2) Second, since we observe that CLIP has poor performance when image embeddings have high cosine similarity, we conduct ablation studies to see what difference between CLIP and LLaVA-1.5 causes their performance gap. We find that evaluation methods, language encoder, and training data are not the key factors that cause the gap, and we suggest that their difference in model paradigm (training and inference methods) plays an important role here.\\n \\n (3) Third, we explore the true bottleneck of LLaVA-1.5 through two attempts in the discussion section. 
We find that LLaVA-1.5 does not attend enough to the visual input, and more visual information is preserved in CLIP image embeddings and aligned correctly than the pair accuracy on the MMVP benchmark suggests.\\n \\n Our contribution (1) **counters the claim** in the CVPR2024 paper \\\"Eyes Wide Shut\\\" that MLLMs fail on simple questions on image pairs because their pre-trained CLIP vision encoders overlook crucial visual details in images as they encode them into highly similar embeddings. While this previous paper states that \\\"vision models might become a bottleneck in multimodal systems,\\\" we suggest the potential of improving current MLLMs with fixed vision models through better utilizing visual information.\\n2. **Your suggestion on including Spearman's rank correlation in Table 1 and providing Spearman's rank correlation coefficient and cosine similarity for the questions with the highest- and lowest-accurate answers.**\\n \\n As we explained at the beginning, Spearman's rank correlation coefficient is not a metric we proposed to use on these benchmarks. It only serves as a toy example to show that cosine similarity does not depict all aspects of vector pairs, and we could still get different information from vectors with high cosine similarity. This motivates us to explore whether, in image embeddings with high cosine similarity (erroneous agreements), distinct information is actually preserved and can be extracted by MLLMs.\\n\\nWe hope that our response could help you better understand the content and the contributions of our paper. If you have further questions or concerns, we are willing to address them as well!\"}", "{\"comment\": \"Thank you for taking the time to review our responses. If there is anything else we can clarify or improve in the paper, we would be happy to address it. 
If you feel the revised version addresses your concerns, we kindly ask you to consider updating your score.\\n\\nWe truly value your feedback and hope our revisions meet your expectations. Thank you again for your time and thoughtful engagement.\"}", "{\"comment\": \"I have read the author's response and the other rebuttal reviews. The work is very interesting, but I will maintain my score.\"}", "{\"comment\": \"I thank the authors for their great effort in providing these new results. However, I consider it a failure in the evaluation protocol. The novel experiments are not comparable with OpenAI-CLIP-ViT-L/14-336 because, as the author outlined, they leverage different visual encoders. InstructBLIP-Vicuna-7B uses EVA-CLIP-ViT-G/14, Otter-Image-MPT7B uses CLIP-ViT-L/14. If the authors want to analyze these methods, they should also provide the results of EVA-CLIP-ViT-G/14 and CLIP-ViT-L/14. Moreover, it is worth noting that EVA-CLIP-ViT-G/14 is better than OpenAI-CLIP-ViT-L-336, as seen in the CVPR2024 paper \\\"Eyes Wide Shut\\\". So, EVA-CLIP could surpass the OpenAI-CLIP-ViT-L-336 and even match the InstructBLIP, indicating that the phenomena the authors highlight could be particular to LLaVA.\"}", "{\"title\": \"Response to Your Further Concerns\", \"comment\": \"Thank you for checking our new response!\\n\\nFirst of all, as we clarified in our previous replies, we would like to reiterate that we did not claim that MLLMs are guaranteed to outperform their visual components. Instead, our argument in the paper is that LLaVA-1.5 is able to extract distinct information from highly similar CLIP embeddings, so erroneous agreements are not the main culprit of its failure on image pairs with such property (e.g., the MMVP benchmark). \\n\\nAs for your concerns, we responded as follows.\\n\\n- *\\\"However, I consider it a failure in the evaluation protocol ... 
If the authors want to analyze these methods, they should also provide the results of EVA-CLIP-ViT-G/14 and CLIP-ViT-L/14.\\\"*\\n\\n The reason why we did not test the results for EVA-CLIP-ViT-G/14 in our previous reply is that we already observed low performance of InstructBLIP and Otter on the What'sUp benchmark. (Their performances on most tasks we evaluated on are below or close to random chance as you can see from the tables.) So they could not outperform their image encoder on these benchmarks.\\n\\n If you are interested, we now provide the results for CLIP-ViT-L/14 and EVA-CLIP-ViT-G/14 below. The results for CLIP-ViT-L/14 correspond to Table 2 of our submission, while those for EVA-CLIP-ViT-G/14 were obtained using the model pretrained on a filtered version of LAION-400M, as provided by the OpenCLIP repository. We can see that these image encoders have performance below or close to random chance on these benchmarks. \\n \\n *Table 1: Results of CLIP-ViT-L/14 and EVA-CLIP-ViT-G/14 on What'sUp Subset A. For comparison, we also include the results for CLIP-ViT-L/14-336px.*\\n | | Subset A, Left/Right, Indiv. | Subset A, Left/Right, Pairs | Subset A, On/Under, Indiv. | Subset A, On/Under, Pairs |\\n | ----- |:-----:|:---------:|:--------:|:----------:|\\n | CLIP-ViT-L/14-336px | 49.0 | 1.9 | 61.7 | 23.3 |\\n | CLIP-ViT-L/14 | 49.0 | 2.9 | 60.2 | 21.4 |\\n | EVA-CLIP-ViT-G/14 | 49.0 | 1.0| 56.3 | 14.6 | \\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n\\n *Table 2: Results of CLIP-ViT-L/14 and EVA-CLIP-ViT-G/14 on What'sUp Subset B. For comparison, we also include the results for CLIP-ViT-L/14-336px.*\\n | | Subset B, Left/Right, Indiv. | Subset B, Left/Right, Pairs | Subset B, Front/Behind, Indiv. 
| Subset B, Front/Behind, Pairs |\\n | ---- |:------:|:---------:|:-------:|:---------:|\\n | CLIP-ViT-L/14-336px | 54.9 | 10.8 | 51.5 | 7.8 |\\n | CLIP-ViT-L/14 | 54.9 | 11.8| 51.0 | 9.8 |\\n | EVA-CLIP-ViT-G/14 | 50.1 | 4.9 | 52.9 | 14.7 |\\n | Random Chance | 50.0 | 25.0 | 50.0 | 25.0 |\\n\\n *Table 3: Results of CLIP-ViT-L/14 and EVA-CLIP-ViT-G/14 on What'sUp (four-way classification) benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px.*\\n | | Subset A, Indiv. | Subset A, Pairs | Subset A, Set of 4 | Subset B, Indiv. | Subset B, Pairs | Subset B, Set of 4 |\\n | -------- |:---------:|:------:|:-----:|:---------:|:----------:|:----------:|\\n | CLIP-ViT-L/14-336px | 28.9 | 1.0 | 0.0 | 27.2 | 1.0 | 0.0 |\\n | CLIP-ViT-L/14 | 26.7 | 1.0 | 0.0 | 25.7 | 1.5 | 0.0 |\\n | EVA-CLIP-ViT-G/14 | 28.2 | 2.4 | 0.0 | 27.9 | 5.4 | 0.0 |\\n | Random Chance | 25.0 | 6.3 | 0.4 | 25.0 | 6.3 | 0.4 |\\n\\n *Table 4: Results of CLIP-ViT-L/14 and EVA-CLIP-ViT-G/14 on COCO-spatial and GQA-spatial benchmark. For comparison, we also include the results for CLIP-ViT-L/14-336px.*\\n\\n | | COCO-spatial, one-obj. | COCO-spatial, two-obj. | GQA-spatial, one-obj. | GQA-spatial, two-obj. 
|\\n | -------- |:---------:|:------:|:----------:|:--------------:|\\n | CLIP-ViT-L/14-336px | 48.9 | 51.1 | 46.6 | 49.1 |\\n | CLIP-ViT-L/14 | 49.1 | 50.2 | 46.0 | 48.1 |\\n | EVA-CLIP-ViT-G/14 | 45.9 | 50.5 | 44.4 | 49.8 |\\n | Random Chance | 50.0 | 50.0 | 50.0 | 50.0 |\\n \\n- *\\\"EVA-CLIP could surpass the OpenAI-CLIP-ViT-L-336 and even match the InstructBLIP, indicating that the phenomena the authors highlight could be particular to LLaVA.\\\"*\\n\\n (1) From the results above, we did not observe that EVA-CLIP surpasses InstructBLIP on these benchmarks.\\n \\n (2) Regardless of whether EVA-CLIP surpasses InstructBLIP, our conclusion remains unchanged, as we do not claim that all MLLMs outperform their visual components.\\n\\nIf you have further questions or concerns, we are willing to address them!\"}", "{\"title\": \"Summary of the Reviews, Responses, and Changes to the Paper\", \"comment\": \"We thank the reviewers for their comments and suggestions regarding our submission. We appreciated positive comments on our contributions, e.g., the paper \\\"challenges and refines an important assumption in the field about VLM limitations\\\" from Reviewer `2b8n`, and it \\\"delivers a valuable message to the community ... counters that (previous) claim\\\" from Reviewer `DMVn`. We also found that reviewers gave positive feedbacks on our ablation studies with analysis (Reviewer `7vBu`) and the interesting observation in the discussion (Reviewer `7vBu` and Reviewer `DMVn`).\\n\\nWe also thank the reviewers for their constructive criticisms. Below, we summarize the major issues and our responses to them:\\n\\n### **Concerns on the storyline being unclear and confusing from Reviewer `DMVn` and Reviewer `fpLb`.**\", \"our_storyline_is_as_follows\": \"Previous research attributes LLaVA-1.5's low MMVP performance to deficiencies in the CLIP vision encoder (namely erroneous agreements). 
We show that LLaVA-1.5 (which uses the CLIP vision encoder) excels on image pairs with erroneous agreements on the What'sUp benchmark, whereas the CLIP model (using both its vision and text encoders) performs poorly. Through ablation, we pinpoint the key differences between LLaVA-1.5 and CLIP that drive this gap and revisit the MMVP benchmark to uncover the true cause of LLaVA-1.5's shortcomings.\\n \\nFrom the feedback of Reviewer `DMVn`, we found that the What'sUp and MMVP benchmarks and model performance on them might be unfamiliar to readers and potentially cause confusion. So we added more description in the introduction to indicate that LLaVA-1.5 performs well on What'sUp but not on the more challenging MMVP, guiding the readers to better understand the storyline. \\n \\n### **Confusion about the function and description of the Spearman's rank correlation from Reviewer `7vBu` and Reviewer `fpLb`.** \\n\\n*First, Spearman's rank correlation only serves as a toy example and does not appear in any other contexts in our submission*. It motivates our exploration and comparison using LLaVA-1.5 on extracting information from highly similar embeddings. However, we did not propose to put it into use in experiments, and it is not part of our core contribution. \\n \\nSecond, for Reviewer `7vBu`'s question about how a high negative Spearman's rank correlation shows that the embeddings are \\\"fully opposed,\\\" we did not intend to show that the vectors are fully opposed in the sense of cosine similarity. Instead, they contain opposed order information. We clarify this by replacing \\\"they are opposed in this sense\\\" with \\\"their order information is fully opposed\\\" in the new version, which better conveys our intended meaning.\\n\\n## Summary of changes to the draft\\n\\nWe have made several changes to the draft based on the reviewers' suggestions (all changes are highlighted in blue in the updated PDF). 
Specifically:\\n\\n- In response to Reviewer `2b8n`'s concern about the generalizability of our findings beyond LLaVA-1.5, we included evaluations on What'sUp for two other MLLMs (LLaMA-3-V-8B and Phi-3-V-3.8B) in Appendix B.5, demonstrating that our findings in Section 3 generalize to these models as well.\\n- For concerns on the storyline raised by Reviewer `DMVn`, we refined the Figure 1 and its caption, and added descriptions in introduction (L52-L54, L67) and Section 3 (L193-L195).\\n- For Reviewer `DMVn`'s concerns about the scope and dataset choice, we added more captions to Figure 2 and included our reasoning in the related work section (L104\\u2013L107).\\n- To clarify the meaning of \\\"visual information extraction\\\" and the toy example as suggested by Reviewer `7vBu`, we rephrased descriptions in the introduction (L74\\u2013L76) and Section 3 (L243\\u2013L244).\\n- We combined the original Table 5 and Table 6 to save space and improve readability.\"}", "{\"summary\": \"This paper examines the performance of LLaVA-1.5-7B on visual reasoning tasks, specifically WhatsUp and MMVP, and concludes that its suboptimal performance is not due to CLIP's visual features. While CLIP visual features effectively capture semantic similarities, they occasionally misinterpret spatial differences in object placement (e.g., \\\"mug on the left\\\" vs. \\\"mug on the right\\\"), which results in high cosine similarity (over 0.95) despite subtle image differences\\u2014referred to as \\\"erroneous agreements.\\\" The authors show that CLIP\\u2019s visual features are accurate; instead, they attribute the performance issues to LLaVA not making effective use of these features. They further demonstrate that poor alignment between visual and textual inputs, not the visual features themselves, explains the bad performance in CLIP models for these tasks and datasets. Unlike CLIP, LLaVA does not exhibit this alignment problem, and this is shown quantitatively. 
Finally, the authors try better decoding strategies in Llava like M3ID such that the decoding better makes use of the visual features. They also show that multiple image inputs work better to highlight the difference in images. They also explore performance gaps related to evaluation methods, training data, and the text encoder.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper delivers a valuable message to the community by advocating for enhancing Multimodal LLMs and keeping the image encoder fixed. Previous research suggested that the image encoder introduced issues by producing \\\"erroneous agreements\\\" (similar embeddings for semantically similar but visually distinct images). However, this paper counters that claim, attributing the problem instead to the model not utilizing these visual features effectively.\", \"Interesting observation of better decoding algorithms and methods for evaluating specific tasks.\"], \"weaknesses\": [\"There is an incoherent story. The abstract initially suggests that LLaVA performs well on reasoning tasks and achieves high accuracy, yet later the paper claims LLaVA performs poorly on MMVP, contradicting the initial statement. They also mention that LLava is able to extract the correct information from the visual features, and that it does not face issues (L186, and demo image). Only later is it clarified that LLaVA performs well on WhatsUp but not on MMVP. In general, I feel there is an unclear and confusing story.\", \"WhatsUp, MMVP, COCO-spatial and GQA-spatial are not really well-known, publicly agreed-upon datasets for measuring reasoning. I actually came to know them after reading this paper. Measuring reasoning on MLLMs is usually not done on these datasets. These datasets are not enough to reflect model reasoning and to come up with general conclusions about LLava or MLLMs in general. 
The authors don\\u2019t show ablation and analysis results using their ablation strategies on important reasoning tasks such as VQA, GQA, OK-VQA, VCR and others (specifically, those that LLava reports on). I feel the scope, task and datasets are not enough to reach the standard required for ICLR.\"], \"questions\": \"In general the second weakness is the biggest to me. I would like to hear what the authors say on this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Our Response to Your Concerns\", \"comment\": \"Thank you for your reply! Below, we offer further explanation and evidence to address your concerns:\\n\\n1. *\\\"The benchmarks the authors used are not enough to support this claim. Specifically, I asked for experiments on other datasets, including the ones used in the original paper [R1] that the authors counteract, but the authors did not address this during the discussion period.\\\"*\\n\\n The benchmarks used in the original paper [R1] include MMVP/MMVP-VLM, LLaVA-Bench, POPE, LLaVA-In-the-Wild, MMBench, TextVQA, VQA-v2, and MM-Vet. Apart from MMVP/MMVP-VLM we used, the rest do not focus on distinguishing two images with erroneous agreements, and thus do not have paired similar images with opposing answers to questions. Hence, results on these benchmarks do not help illustrate that LLaVA-1.5 shows a better visual information extraction ability than CLIP on images with erroneous agreements.\\n\\n To address the need for more generalizable insights, we tested CLIP and LLaVA-1.5 on another benchmark, NaturalBench [R2], to show the generalizability of our argument. It is a vision-centric benchmark that challenges vision-language models with pairs of simple questions about natural imagery. This dataset contains 1,900 test cases, each consisting of two images and two questions with opposing answers. 
Since NaturalBench follows MLLM's format but not CLIP's, we converted the questions and options into captions using GPT-4o-mini for CLIP evaluation. We report the individual accuracy and pairs accuracy as we did in Table 3 of our submission. Below are the results:\\n\\n *Table 1: Results of CLIP-ViT-L/14-336px and LLaVA-1.5-7B on NaturalBench. The results for LLaVA-1.5-7B come from the Acc. and Q-Acc. in Table 1 of the NaturalBench paper.*\\n \\n | | Indiv. Acc. | Pairs Acc | \\n | --- |:-----:|:----:|\\n | CLIP-ViT-L/14-336px | 56.5 | 20.5 |\\n | LLaVA-1.5-7B | **67.3** | **37.7** |\\n | Random Chance | 50.0 | 25.0 |\\n\\n Given that the images were not collected by high CLIP cosine similarity, we further consider their performance on 320 testcases in NaturalBench with the highest CLIP image cosine similarity (>0.85):\\n \\n *Table 2: Results of CLIP-ViT-L/14-336px and LLaVA-1.5-7B on 320 testcases from NaturalBench.*\\n \\n | | Indiv. Acc. | Pairs Acc | \\n | --- |:-----:|:----:|\\n | CLIP-ViT-L/14-336px | 53.7 | 13.9 |\\n | LLaVA-1.5-7B | **63.0** | **28.8** |\\n | Random Chance | 50.0 | 25.0 |\\n\\n From the results, we can see that the performance gap between LLaVA-1.5 and CLIP on similar image pairs generalizes to this benchmark, which has larger size than MMVP and has more diverse visual content than What'sUp, showing the generalizability of our finding.\\n\\n2. *\\\"Moreover, only LLava is considered, and we cannot get generalizable insights from one model. The results could be of the way Llava is trained or because of its architecture.\\\"*\\n \\n First, we showed by additional results of LLaMA-3-V-8B and Phi-3-V-3.8B that our findings generalize to other LLaVA-like models of varying scales and language backbones. \\n \\n Second, we acknowledge the possibility that the way LLaVA-like models are trained or their architecture might be important to realize strong extraction ability. 
We did not intend to prove the advantage of all MLLMs over their image encoders, since we note that MLLMs adopt various model architectures and training pipelines, and some of them might not be effective in visual information extraction. Nevertheless, our findings on LLaVA-like models show the feasibility of extracting distinct information from highly similar image embeddings through MLLMs, together with the effectiveness of LLaVA's methodology compared with CLIP on such image pairs.\\n\\nWe hope these additional analyses and clarifications address your concerns. Thank you for your valuable feedback!\\n\\n[R2] NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples.\"}", "{\"summary\": \"The paper challenges the prevailing belief that Vision-Language Models' (VLMs) failures in visual reasoning are primarily due to CLIP image encoder's \\\"erroneous agreements\\\" (where distinct images have high cosine similarity). Using LLaVA-1.5-7B as an example, they demonstrate that MLLMs can successfully extract distinct information from similar image embeddings, achieving high accuracy on tasks where CLIP performs poorly. 
This suggests that the limitation lies not in the image embeddings themselves, but in how effectively models extract and utilize the encoded information.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Provides compelling empirical evidence through controlled experiments across multiple benchmarks.\\nChallenges and refines an important assumption in the field about VLM limitations.\\nDemonstrates that existing architectures might be more capable than previously thought, just requiring better utilization strategies.\", \"weaknesses\": \"The paper's scope might be too focused on LLaVA-1.5 as the primary example, potentially limiting the generalizability of findings\\nWhile the paper shows that information can be extracted from similar embeddings, it doesn't fully tackle why LLaVA-1.5 is able to do this.\", \"questions\": \"How do these findings generalize to other MLLMs beyond LLaVA-1.5?\\nWhat specific mechanisms allow MLLMs to extract distinct information from seemingly similar embeddings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates whether CLIP's image embeddings are truly the bottleneck for vision-language models' (VLMs) performance on visual reasoning tasks. Using LLaVA-1.5 as the primary example, the authors challenge the previous assumption that erroneous agreements in CLIP embeddings (high cosine similarity between visually distinct images) are responsible for VLMs' poor visual reasoning performance.\\n\\n### Strengths:\\n\\n1. Novel perspective challenging existing assumptions:\\n> \\\"delivers a valuable message to the community ... counters that claim, attributing the problem instead to the model not utilizing these visual features effectively\\\" - Reviewer DMVn\\n\\n2. 
Systematic experimental methodology: \\n> \\\"Demonstrates that existing architectures might be more capable than previously thought, just requiring better utilization strategies\\\" - Reviewer 2b8n\\n\\n3. Technically sound ablation studies:\\n> \\\"Important analysis shown in section 4 (Investigating the performance gap)... visual information extraction is the key factor in determining the performance gap\\\" - Reviewer 7vBu\\n\\n## Weaknesses:\\n\\n1. Limited scope and generalizability:\\n> \\\"WhatsUp, MMVP, COCO-spatial and GQA-spatial are not really well-known datasets... These datasets are not enough to reflect model reasoning and to come up with general conclusions\\\" - Reviewer DMVn\\n\\n2. Methodological concerns:\\n> \\\"The novel experiments are not comparable with OpenAI-CLIP-ViT-L/14-336 because, as the author outlined, they leverage different visual encoders\\\" - Reviewer fpLb\\n\\n3. Insufficient evidence for claims:\\n> \\\"I find the claim that the authors make not well-supported by experimental evidence. The benchmarks the authors used are not enough to support this claim\\\" - Reviewer DMVn\\n\\n### Justification\\n\\nDespite interesting insights and thorough revisions, fundamental concerns about methodology, generalizability, and evidence remain. All reviewers rated the paper below the acceptance threshold (three at \\\"marginally below\\\" and one \\\"reject\\\"), indicating the contribution does not meet the conference bar. Key issues include:\\n\\n1. Limited benchmark selection that doesn't fully support the claims\\n2. Methodological issues in model comparisons\\n3. Lack of evidence that findings generalize beyond LLaVA\\n\\n\\nWhile the paper raises important questions about VLM limitations and CLIP embeddings, the methodological concerns and limited scope make it unsuitable for acceptance at this time. 
The authors' revisions, while thorough, did not fully address these core issues as evidenced by maintained negative ratings from all reviewers.\", \"additional_comments_on_reviewer_discussion\": \"The authors made significant efforts to address reviewer concerns:\\n\\n1. Added experiments with additional models (LLaMA-3-V-8B, Phi-3-V-3.8B, InstructBLIP, Otter-Image)\\n2. Included NaturalBench evaluation for generalizability\\n3. Clarified methodology and scope\\n\\nHowever, none of the reviewers changed their scores after the revisions:\\n- fpLb maintained that experiments remain incomparable\\n- DMVn still found evidence insufficient\\n- 2b8n and 7vBu maintained their \\\"marginally below threshold\\\" ratings\"}" ] }
5DUekOKWcS
Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis
[ "Guangchen Lan", "Dong-Jun Han", "Abolfazl Hashemi", "Vaneet Aggarwal", "Christopher Brinton" ]
To improve the efficiency of reinforcement learning (RL), we propose a novel asynchronous federated reinforcement learning (FedRL) framework termed AFedPG, which constructs a global model through collaboration among $N$ agents using policy gradient (PG) updates. To address the challenge of lagged policies in asynchronous settings, we design a delay-adaptive lookahead technique *specifically for FedRL* that can effectively handle heterogeneous arrival times of policy gradients. We analyze the theoretical global convergence bound of AFedPG, and characterize the advantage of the proposed algorithm in terms of both the sample complexity and time complexity. Specifically, our AFedPG method achieves $\mathcal{O}(\frac{{\epsilon}^{-2.5}}{N})$ sample complexity for global convergence at each agent on average. Compared to the single agent setting with $\mathcal{O}(\epsilon^{-2.5})$ sample complexity, it enjoys a linear speedup with respect to the number of agents. Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}({\sum_{i=1}^{N} \frac{1}{t_{i}}})^{-1}$, where $t_{i}$ denotes the time consumption in each iteration at agent $i$, and $t_{\max}$ is the largest one. The latter complexity $\mathcal{O}({\sum_{i=1}^{N} \frac{1}{t_{i}}})^{-1}$ is always smaller than the former one, and this improvement becomes significant in large-scale federated settings with heterogeneous computing powers ($t_{\max}\gg t_{\min}$). Finally, we empirically verify the improved performance of AFedPG in four widely used MuJoCo environments with varying numbers of agents. We also demonstrate the advantages of AFedPG in various computing heterogeneity scenarios.
[ "Reinforcement Learning", "Federated Learning", "Asynchronous System" ]
Accept (Poster)
https://openreview.net/pdf?id=5DUekOKWcS
https://openreview.net/forum?id=5DUekOKWcS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yf4s2H1Kzz", "y4VxfmLJFu", "qUF1DrMQbk", "pzRvPLVKh9", "o3rnjoFJKw", "l2bnMhulNS", "l0AErJErAM", "fOednYTANb", "dxMsobYf2H", "cvYObjiKno", "aLKjbqul8m", "SyYuRf2ZWg", "RMhLseW4BV", "ORwijJV3pA", "K1dexS7eAX", "FMspUkoUNx", "DSovwvYRSS", "B2AHFuyGBa", "AI49oQKVXM", "A6lIPFXqpS", "8XbQxumQZl", "5zY22JSSel", "5zDtzv9Zsq", "5O5oadaECo", "25abMJ1apS" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731039426320, 1732754747925, 1730666406564, 1733099104323, 1732408166769, 1732408592215, 1732657461766, 1733196065554, 1732422693770, 1737523504121, 1732638821694, 1732586327289, 1732411066156, 1732981151586, 1730757189449, 1732627662255, 1732409022918, 1734766775004, 1732705457960, 1744867050771, 1730710309174, 1732704991535, 1729697138056, 1732927163374, 1732925284483 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_YjHR" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_ui29" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_ui29" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_T8LV" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_YjHR" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_oswF" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_ui29" ], [ 
"ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_oswF" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_T8LV" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_KckT" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Area_Chair_Mro7" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "~Flint_Xiaofeng_Fan1" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_KckT" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Reviewer_oswF" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ], [ "ICLR.cc/2025/Conference/Submission2447/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper aims to enhance the efficiency of federated reinforcement learning (FedRL) by introducing an asynchronous framework, AFedPG, which leverages policy gradient (PG) updates from multiple agents without requiring synchronized updates. This approach is designed to address issues related to delayed updates and computational heterogeneity, which are common challenges in federated setups, especially with varying agent speeds and capacities.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Contributions claimed in the paper include,\\n\\n--Proposes a new asynchronous FedRL algorithm (AFedPG) tailored to policy gradient updates, using a delay-adaptive lookahead technique to manage lagging updates in asynchronous settings.\\n\\n-- Provides theoretical convergence guarantees, including global and first-order stationary point convergence, for the asynchronous federated policy-based RL.\\n\\n-- Achieves a linear speedup in sample complexity with an increasing number of agents, reducing the per-agent complexity from $O(\\\\epsilon^{-2.5})$ to $O(\\\\epsilon^{-2.5}/N)$. 
(However, the proof is unclear and it is hard to see how the authors can avoid a dependence on the delay in the sample complexity.)\\n\\n -- Improves time complexity over synchronous methods by reducing the dependency on the slowest agent\\u2019s computational time, with gains highlighted in scenarios of high computational heterogeneity.\\n\\n-- Empirically validates AFedPG's performance in various MuJoCo environments, demonstrating faster convergence (time-wise) over synchronous FedPG and other baselines.\", \"weaknesses\": \"In general, the paper is not clearly written. I don't see how the authors were able to avoid a dependence on the delay in their sample complexity. Their current derivations for bounding the error term (from the delay) have many typos and are hard to follow. Specific concerns/questions of the paper include:\\n\\n-- Step 4 in Algorithm 2 is confusing. Where does the local agent get $d_{k-1}$ from? Did the authors mean $d_{k-\\\\delta_k}$ instead? If the authors meant $d_{k-1}$, the current algorithm descriptions do not mention how $d_{k-1}$ can be made available to agent $i$.\\n\\n-- A major component of the proof is bounding the error term $e_k := d_{k-\\\\delta_k} - \\\\nabla J(\\\\theta_k)$, which arises from the delay. Equation (30) in the appendix provides a derivation of how $e_k$ can be expressed (and subsequently bounded). However, there seems to be serious typos in equation (30). For instance, in the first line, I am not sure why a term $d_{\\\\delta_{k-1}}$ appears, when $e_k$ is actually $d_{k-\\\\delta_k} - \\\\nabla J(\\\\theta_k)$. This makes it difficult to follow the argument in this derivation, and there is also no explanation of the derivation, which might have made it easier to follow the argument flow. 
Given that this is a particularly important term to bound to derive either first-order or global convergence rates, the authors should make an effort to clarify and explain these derivations.\\n \\n-- The current convergence bound seems to have no dependence on the delay in the network, which is $N$ in the worst-case (e.g. assuming cyclic update). This is somewhat confusing to me; intuitively, even with a delay-adaptive step size for the $\\\\theta$ update, there should be some price to pay for a cyclic delay structure. My current understanding is that perhaps the authors were able to bypass the dependence on the delay by their handling of the gradient-bias term $e_k$ (caused by the delay). However, given that the current derivation of bounding $e_k$ is highly unclear (see my earlier point), it is not clear to me whether the result as currently stated actually holds. If it holds, the authors should make it a lot clearer how and why they are able to avoid the dependence on the delay, as this is a key part of their contribution. \\n\\n-- The definition of the global time is unclear. The authors should make it more precise, and have a formal statement and proof of their current stated bound on the global time being $O(\\\\frac{\\\\bar{t}\\\\epsilon^{-2.5}}{N})$, where $\\\\bar{t} = \\\\frac{1}{\\\\sum_{i=1}^N \\\\frac{1}{t_i}}$. \\n\\n--On a related note, the definition of $t_i$ seems a little unclear to me, given that at different iterations, agent $i$ might require varying amounts of time (i.e. there shouldn't be a single time complexity $t_i$ for each agent $i$). The authors should make their definition of what they mean by $t_i$ more precise.\", \"questions\": \"See my questions from the previous section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks to the authors for further experiments. 
I think the results now look reasonable to me. Since the authors have addressed most of my concerns, I would like to increase my score from 5 to 6.\"}", "{\"summary\": \"This paper proposes a policy-based federated RL method in an asynchronous setting to handle varying arrival times of policy gradient updates. Specifically, the authors analyzed the global and FOSP sample complexity as well as time complexity with a concrete algorithm design. The authors also provided simulation results on MuJoCo, which tackle sample and time complexity issues separately. The proposed method is more practical and is adaptable to various computing heterogeneity scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Numerical experiments on MuJoCo demonstrate impressive results that support the better time complexity of the proposed method\", \"Both FOSP and global sample complexity match the state-of-the-art while the global time complexity can have a tighter bound with heterogeneous arrival times\"], \"weaknesses\": [\"The ultimate goal of federated RL is to find the trade-off between sample and communication complexity, while the emphasis of this work on communication complexity/strategy is limited and not clear to me. Please elaborate more on what threshold or event triggers any agent's synchronization/communication with the server in your proposed framework.\", \"There are some typos in the manuscript. For example, you write *MoJuCo* instead of *MuJoCo* in the caption of Figures 3 and 4.\"], \"questions\": [\"In Line 268, you mention *the set of active agents*. Does it mean the agents that can apply global iteration? 
If so, then the following paragraph mentions that *only one gradient to update the model from the agent who has finished its local computation.* In other words, does it allow more than one agent to apply policy gradient at the same iteration?\", \"For Figure 3, could you please let PG (N=1) and AfedPG (N=2) train even longer to see if they can converge to a similar reward as the other two? If they cannot, I feel curious as to why they can't.\", \"Is there any analysis or experiment of communication cost?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Dear Reviewer KckT,\", \"We truly appreciate the responses. Here are the new results that we have added in the updated version as suggested:\", \"Re-state the algorithms with clear explanations.\", \"Time complexity analysis and explanations in Appendix A.1.\", \"More explanations with details in the proof in Appendix B.\", \"Reward performances with long runs in Appendix A.3.\", \"Communication overhead analysis and comparisons between asynchronous and synchronous settings in Appendix A.3.\", \"We would like to briefly state our key contributions here. We are the first work to analyze the policy gradient methods in FedRL with general function parameterization and achieve the SOTA global convergence rate. The proposed delay-adaptive lookahead technique mitigates second-order errors in practical settings with lagged gradients, enabling guaranteed convergence in asynchronous FedRL -- a challenge not addressed in existing works. We improve the performances in communication cost and time consumption in an asynchronous setting, which is new.\", \"Do you have any specific aspects that we can clarify and further improve? Thank you so much for the suggestion!\"]}", "{\"comment\": \"Thank you for reading the paper carefully! Hopefully, we have answered all your questions. 
If there are any other concerns, we would be happy to answer them.\\n\\n---\\n\\n**W1.** \\nWe are sorry about the confusion. In the original algorithms, the $k$ on the server and agent sides are not the same one. To avoid confusion, we have reconstructed the algorithm description (Algorithm 1 and 2), while keeping all notations unchanged in the theoretical analysis. In the new description, simply plugging the equations into one another leads to the theoretical results. We hope this could help.\\n\\n---\\n\\n**W2.** \\nThank you so much for bringing this! It is indeed a typo. It should be $d_{k-1-\\\\delta_{k-1}}$ instead of $d_{\\\\delta_{k-1}}$. We have fixed the typo, and it should be consistent now.\\n\\nWe have $e_{k} = d_{k - \\\\delta_{k}} - \\\\nabla J(\\\\theta_{k})$ and $d_{k - \\\\delta_{k}} = (1 - \\\\alpha_{k - \\\\delta_{k}}) d_{k-1 - \\\\delta_{k-1}} + \\\\alpha_{k - \\\\delta_{k}} g(\\\\tau_{k - \\\\delta_{k}}, \\\\theta_{k - \\\\delta_{k}})$. (We cannot render the \\\\widetilde over $g(\\\\cdot)$ here.)\\n\\nCombining them, we have \\n$e_{k} = (1 - \\\\alpha_{k - \\\\delta_{k}}) d_{k-1 - \\\\delta_{k-1}} - \\\\nabla J(\\\\theta_{k}) + \\\\alpha_{k - \\\\delta_{k}} g(\\\\tau_{k - \\\\delta_{k}}, \\\\theta_{k - \\\\delta_{k}})$. \\nThen, we can obtain the result in equation 31, and construct the recursive form.\\n\\n---\\n\\n**W3.**\\nIn the worst case, the largest delay could be infinite (an agent that never communicates), while the average delay is upper bounded by $N$. \\n\\nWe have a very interesting result in AFedPG. We do not make any assumptions about the delay, and the largest delay is allowed to be infinite. If some agents do not communicate at all, we simply lose their computation resources. The convergence rate is upper bounded by the average delay $\\\\bar{\\\\delta}$, and the average delay is bounded by $N$ in Lemma B.10. 
In equation 37, the term with the average delay is much smaller than the dominant term $\\\\mathcal{O}(K^{-2.5})$ ($K$ could reach the order of $10^{6}$ or even larger), and thus does not hurt the convergence rate.\\n\\n---\\n\\n**W4.**\\n Thank you for the suggestion! We have added the explanation with a detailed derivation in Appendix A.1. It should be very easy to understand.\\n\\n---\\n\\n**W5.**\\nAs each agent has the same computation requirement in each iteration (the number of collected samples is the same), we assume that the time complexity in each iteration is the same. Sorry about the confusion. We have added this statement in line 392.\"}", "{\"comment\": \"**W1.**\\nThank you so much for this suggestion! The communication overhead was not the target of this paper. However, after we accepted your valuable advice and dived deep into the analysis, we found that AFedPG actually has advantages compared to FedPG. We added communication overhead results in Appendix A.3, and answered Q3. We hope this could address your concerns.\\n\\n---\\n\\n**W2.** \\nThank you! We have fixed the typos with another round of proofreading.\\n\\n---\\n\\n**Q1.**\\nIn the asynchronous setting, concurrency roughly means the number of machines that are computing. In the synchronous setting, the server waits until it receives policy gradients from all $N$ agents, while in the asynchronous setting, the server runs as soon as one policy gradient is received. Since arrival times lie in a continuous space, the event that two arrival times coincide has measure zero. Thus, only one agent's gradient is applied at the server at any instant, while the others keep computing.\\n\\n---\\n\\n**Q2.**\\nIt takes a very long time to wait for them to converge to the optimal point, especially when the state and action spaces are large. We only use these experiments to show the benefit of more agents. To answer the question and verify this hypothesis, we test the result on the Swimmer-v4 task. 
They can converge to a similar reward, while the variance (shadow area) is larger. We conjecture that the federated setting has the potential to reduce the variance when more agents engage, which could be an interesting direction for future study.\\n\\n---\\n\\n**Q3.**\\nThank you for the suggestion! We have added the analysis of experiments in Appendix A.3 to further enhance it. We briefly state the key points here.\\n\\nIn theory, the synchronous and asynchronous settings have the same cumulative communication cost, as the sample complexities are the same and each agent collects the same number of samples in each update. Our analysis shows that AFedPG keeps the SOTA sample complexity among policy gradient methods in RL in Table 1. Moreover, when we analyze the training process through the lens of fine-grained time, AFedPG shows several advantages.\\n\\nOn the server side, in FedPG, it is noticeable that the communication mainly happens in a short time window in the downlink process, which brings a heavy peak burden, especially in the resource-constrained scenario. In AFedPG, the communication is more evenly distributed, which does not have a peak burden and requires fewer communication resources.\\n\\nOn the agent side, in FedPG, all agents have the same communication burden. In AFedPG, the leader agent (fast) has more communication burden $\\\\mathcal{O}(t_{\\\\max})$, while the straggler agent (slow) has less communication burden $\\\\mathcal{O}(t_{\\\\min})$. This is a more rational allocation than the equal setting that disregards the heterogeneous computing resources. In practice, agents have heterogeneous resources, while FedPG does not consider this and evenly distributes the burden. The equal allocation could bring too much burden for the slow ones. 
In AFedPG, it shifts the burden to the fast ones naturally.\\n\\nWe hope this section could address your concerns.\"}", "{\"comment\": \"Thank you for the suggestions in general. We sincerely apologize for the confusion. If there are any other questions, we would be happy to answer them.\\n\\n---\\n\\n**W1.**\\nAveraging over local updates is a scenario in the conventional synchronous setting. In the asynchronous setting, the agent sends the update to the server as soon as it finishes the computation. In most conventional federated learning works, only the synchronous setting is considered. We aim to extend the scope to the asynchronous setting.\\n\\n---\\n\\n**W2.**\\nThanks for the suggestion! We admit that the **Actor-Critic** (AC) method in [3] (not PG) contains novel contributions in theoretical analysis. \\n\\nHowever, it only includes linear parametrization, which is a very weak and impractical result (**See W3**). With a general function parametrization, e.g., neural networks (Deep RL), the SOTA result (even in the single-agent setting) of the AC method is $\\\\mathcal{O}({\\\\epsilon}^{-3})$ [4]. It is worse than the SOTA result in policy gradient, which is $\\\\mathcal{O}({\\\\epsilon}^{-2.5})$. 
We aim to compare with the best results, and thus, choose to analyze PG methods.\\n\\n---\\n\\n**W3.**\\n\\n1. [3] only has **Linear Parametrization** (Deep RL is not included.), which is a very weak result with limited meaning. With a General Function Parametrization, e.g., neural networks (Deep RL), there is no such result. We consider a general and practical setting with a General Function Parametrization. \\n - Even in the single-agent setting (without federated agents), the SOTA result of the **AC method is $\\\\mathcal{O}({\\\\epsilon}^{-3})$ [4]** and the previous approach is $\\\\mathcal{O}({\\\\epsilon}^{-6})$ [5], which is still worse than our $\\\\mathcal{O}({\\\\epsilon}^{-2.5})$. In the federated setting, there is no result that achieves $\\\\mathcal{O}({\\\\epsilon}^{-3})$ for AC methods. Moreover, with a general function parameterization, we compare the performances of their A3C in Figure 4, which is much worse.\\n\\n2. [3] relies on a strong and impractical assumption, Assumption 2. It assumes that the largest **delay is bounded** by a constant $K_{0}$. However, in practice, the slowest agent may not communicate with the server, and thus, has an infinite delay. In our analysis, we do not require any bound on the largest delay, because only the average delay appears in the convergence rate, and the average delay is naturally bounded by the number of agents $N$ in Lemma B.10 (our corollary).\\n\\n3. [3] is an AC method with **extra value networks**, which requires much more computation and memory cost compared to the pure policy gradient method. Thus, the fine-tuning of Gemini and GPT4 uses policy gradient methods **instead of AC** methods.\\n\\n---\\n\\n**Q4.**\\nWe aim to train a global policy in FedRL. The theorems give the convergence rates for the policy model w.r.t. the number of global updates on the **server**. It has **no relationship** with the index of agents. 
We achieve a result that the global convergence rate is upper bounded by the average delay, and the average delay is bounded by the number of agents $N$. This is new in FedRL. If there are any other questions, we would be happy to answer them.\\n\\n---\\n\\n**Q5.**\\nWe are the first work to analyze federated policy gradient methods with General Policy Parameterization, e.g., neural networks (Deep RL), for a Global Convergence, and achieve the SOTA results. \\n\\nEven without the asynchronous setting, there is NO previous FedPG work that achieves a Global Convergence with General Policy Parameterization (Deep RL) with the SOTA rates.\\n\\nBeyond that, we design the asynchronous method and further improve the performances, which is the main contribution in this paper. Without our lookahead techniques, there is no global convergence rate in AFedPG with a general parameterization setting, because the second-order errors are hard to bound. We find that the asynchronous method (with our techniques) is not just as good, but better. \\n\\n---\\n\\n**Q6.**\\nThank you for the suggestion. We have changed the order in the main paper.\\n\\n---\\n\\n[4] Closing the Gap: Achieving Global Convergence (Last Iterate) of Actor-Critic under Markovian Sampling with Neural Network Parametrization. ICML 2024.\\n\\n[5] Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy. ICLR 2021.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to the authors\", \"comment\": \"Thanks to the authors for their detailed replies and revision. My mentioned issues are fixed, and the algorithm is now in a clear state. Overall, I believe the heterogeneous updating scheme is very interesting. 
Such technical analysis is new to the existing literature and is of broader interest to the ICLR audience.\\n\\n\\nThus, I increase my rating for Presentation to 3 and keep the overall rating of 6.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks to the authors for answering all my questions and further proofreading. However, I am still not quite satisfied with Q2; in particular, I assume that PG (N=1) is just a conventional approach that should at least converge eventually, which should already be shown in prior practical works. The authors mentioned that they tested the result on the Swimmer-v4 task and they can converge to a similar reward for PG (N=1) and AfedPG (N=2). Could you please point out where this has been updated in your manuscript? Also, it seems especially hard to tell if Humanoid-v4 can converge to the optimal point in Figure 3 for PG(N=1), given that it is still gradually converging.\"}", "{\"comment\": \"Thank you for bringing the valuable work in the synchronous setting! We hope that we have addressed your concerns. If there are any other questions, we would be happy to answer them.\\n\\n---\\n\\n**W1.**\\nWe are the first work to analyze federated policy gradient methods with **General Policy Parameterization**, e.g., neural networks (Deep RL), for a **Global** Convergence, and achieve the SOTA results. \\n\\nEven without the asynchronous setting, there is no previous FedPG work that achieves a global convergence with general policy parameterization with the SOTA rates.\\n\\nBeyond that, we design the asynchronous method and further improve the performances, which is the main improvement and contribution in this paper.\\n\\nWe also list 4 points in W2, which make the novelty clearer and easier to understand. If there are any other concerns, we would be happy to answer them.\\n\\n---\\n\\n**W2.**\\nThank you for bringing this synchronous work! 
It is indeed valuable and has several contributions to theoretical analysis. We have cited it in the updated version as it improves the results in FedRL. However, beyond the synchronous setting, there are several main differences in terms of completeness and methodology:\\n\\n1. We achieve a Global Convergence with the SOTA rate, while that work (FedM for short) only focuses on stationary point convergence. It seems that FedM cannot go through the global convergence analysis without new techniques.\\n\\n2. We have General Function Parameterization, e.g., neural networks (Deep RL), results in both theory and experiments. That paper does not analyze the influence of general function parametrization, e.g., Deep RL, in theory. The general function approximation might bring higher sample complexity, especially for the global convergence.\\n\\n3. In their setting, the local policies will not be in different steps (fixed to the hyperparameter $K$ in their paper). In our asynchronous setting, we do not require that each agent is in the same step, which brings the second-order error term in equation 31 and unbounded variance. Thus, we design the lookahead mechanism to cancel the second-order errors, normalized updates to bound the variance, and thus, secure the convergence. This technique is new and unique.\\n\\n4. Their momentum design is NOT the delay-adaptive lookahead method in our paper. \\n\\n - The momentum is used for gradient estimation (not sampling) at Step 5 in Algorithm 1. We did not claim this momentum part as a novelty of our method. Our delay-adaptive lookahead method is at Step 8 in Algorithm 1 for trajectory ''sampling'' at Step 2 in Algorithm 2. This is new and has never been used before in any FedRL work. \\n\\n - On the other hand, the orders are different. In momentum methods, the momentum-averaged point is ''between'' the old ($k-1$)-th and the new $k$-th points. 
In our delay-adaptive lookahead method, the new $k$-th point is ''between'' the old ($k-1$)-th point and the momentum-averaged point. This is the reason that we call it ''lookahead''.\\n\\n - Roughly speaking, we have two momenta, but unlike the definition of momentum gradient descent, the sampling one is not a conventional momentum.\"}", "{\"title\": \"Further reply to authors\", \"comment\": \"Thanks to the authors for their further improvement of the paper. In the current version, the numerical section and the proof are in a better state. Figure 7 clearly highlights the advantage of the asynchronous method when applied to agents with heterogeneous efficiency. Thus, I will further increase my rating accordingly.\"}", "{\"summary\": \"This paper proposes an asynchronous federated reinforcement learning framework. Then it introduces a delay-adaptive lookahead technique and employs normalized updates to integrate policy gradients to deal with the challenges brought by the asynchrony. Furthermore, the paper provides the theoretical global convergence bound. The experiments verify the improved performance of the proposed algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Convergence results are provided.\\n2. Asynchronous federated reinforcement learning framework is proposed.\", \"weaknesses\": \"1. This paper is not built on a federated framework. FedRL is designed to address heterogeneous environments and allow local agents to perform multiple iterations [1,2]. However, these are not considered in this paper.\\n\\n[1] Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments, ICML24.\\n[2] Federated Reinforcement Learning with Environment Heterogeneity, AISTATS22.\\n\\n2. This work lacks necessary comparisons with current works. Actor-critic is a policy-based approach. 
This paper needs careful, detailed comparisons with [3], since both emphasize asynchrony; [3] is only briefly mentioned in the Introduction.\\n[3] Towards understanding asynchronous advantage actor-critic: convergence and linear speedup.\\n\\n3. Technical contributions are limited. Authors claimed that even if all agents have an identical environment, each agent collects samples according to different policies because of the delay. This dynamic nature makes both the problem itself and the theoretical analysis challenging. However, this is somehow solved by [3]. The challenges brought by the features of Fed RL are not considered in this paper.\", \"questions\": \"4. Why does the proof of the theorems lack the index of agent $i$? Since the server does not aggregate gradients or parameters from agents periodically, Fed RL is not applicable in this paper. Besides, it is just similar to [3]. The notations are also confusing.\\n\\n5. What are the technical contributions beyond existing FedRL? The technical differences of AFedPG compared to FedPG seem limited.\\n\\n6. Authors first get the results of global convergence, then FOSP results. Why are the FOSP results placed first in the main text?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' clarifications. After carefully reading their responses and the other reviews, I remain unconvinced about the paper's contribution. I will maintain my score.\"}", "{\"comment\": \"Thank you for reading the paper carefully! If there are any other concerns, we would be happy to answer them.\\n\\n---\\n\\n**W1.** Section 4.\\n\\nWe have added more explanation of $d_{k-1}$ and $\\\\theta_{k-1}$, and reconstructed our algorithms (Algorithm 1 and 2) to clearly show the process. 
We keep all notations unchanged in the theoretical analysis.\\n\\nWe have added literature explanations in the Introduction.\\n\\nFor trajectory sampling, it is indeed not practical to sample the whole trajectory. However, with the discount factor $\\gamma$, the variance is exponentially small in the horizon [1,2]. The truncation does not influence the convergence results, and thus, the infinite horizon setting [3] is widely used in the recent analysis.\\n\\n---\\n\\n**W2.** Section 5.\\n\\nThank you for the reminder! Though the equations are correct in the theorems in the appendix, we had a typo in the simplified version in equation 10. We have fixed the typo in the main paper.\\n\\nWe have added the derivation with details in Appendix A.1. Line 394 can now be explained according to Appendix A.1.\\n\\n---\\n\\n**W3.** Appendix B.\\n\\nThanks for the suggestion! We have added the explanations of the conventional notations in Appendix B.1.\\n\\nWe have added the exact positions of the cited lemmas (Lemma B.7 and B.8).\\n\\nWe have fixed all typos with more rounds of proofreading.\\n\\n---\\n\\n[1] On the Theory of Policy Gradient Methods: Optimality, Approximation, and Distribution Shift. Journal of Machine Learning Research 2021.\\n\\n[2] An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods. NeurIPS 2020.\\n\\n[3] Improved Sample Complexity Analysis of Natural Policy Gradient Algorithm with General Parameterization for Infinite Horizon Discounted Reward Markov Decision Processes. AISTATS 2024.\"}", "{\"metareview\": \"This paper introduces AFedPG, an asynchronous framework for federated reinforcement learning (FedRL), which utilizes policy gradient (PG) updates from multiple agents without requiring synchronized updates. The approach is specifically designed to address challenges in federated setups, such as delayed updates and computational heterogeneity due to varying agent speeds and capacities. 
The authors provide theoretical results, including first-order and global convergence rates, and conduct simulation experiments on MuJoCo to demonstrate improvements in sample and time complexity.\\n\\nThe reviewers raised several common concerns, including the need for clarification of notation and presentation, the absence of a communication cost analysis, and the novelty of the techniques. In their rebuttal, the authors successfully addressed these concerns. They clarified misunderstandings and corrected typos, leading to improved reviewer scores. Additionally, the inclusion of communication cost comparisons in the experiments strengthened the paper\\u2019s relevance to the federated setting. Regarding technical novelty, the authors convincingly argued that there were no prior results on the convergence of federated policy gradients for general policy parameterizations. Furthermore, their theoretical results outperform those of single-agent settings by achieving a linear speedup. These contributions, combined with the clarification of previously ambiguous points, make the paper a valuable addition to the FedRL literature. Based on these factors, I recommend weak acceptance of this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised several common concerns, including the need for clarification of notation and presentation, the absence of a communication cost analysis, and the novelty of the techniques. 
In their rebuttal, the authors successfully addressed these concerns.\"}", "{\"comment\": \"Thank you so much for the endorsement!\\n\\nWe have uploaded a new version with more results and improvements based on the valuable suggestions from the reviewers.\", \"here_are_the_results_that_we_have_added\": [\"Communication overhead analysis and comparisons between asynchronous and synchronous settings in Appendix A.3.\", \"Reward performances with long runs in Appendix A.3.\", \"More details in the proof in Appendix B, e.g., the derivation of equation 34 with details, and the explanation of equation 36.\", \"Time complexity analysis in Appendix A.1.\"], \"highlight\": \"We are the first work to analyze asynchronous policy gradient methods with general function parameterization, and we achieve the SOTA sample complexity for global convergence.\\n\\nWe hope this would help with the evaluation.\"}", "{\"title\": \"Recommendation to include FedPG\\u2011BR (NeurIPS\\u00a02021)\", \"comment\": \"We congratulate the authors on introducing AFedPG, a novel asynchronous federated RL algorithm with rigorous convergence guarantees under heterogeneous delays. We respectfully note that FedPG\\u2011BR (NeurIPS\\u00a02021):\\n\\n1. Was the first FedPG framework to prove a formal sample\\u2011complexity bound in the settings similar to AFedPG,\\n\\n2. Additionally provides Byzantine resilience, tolerating up to 50\\u00a0% adversarial agents.\\n\\nA brief citation and discussion of FedPG\\u2011BR in the related work would further strengthen the paper\\u2019s scholarly context. We hope the authors will kindly consider including this reference to acknowledge its foundational role in federated policy gradients.\\n\\nCheers,\\nFlint\", \"https\": \"//flint-xf-fan.github.io/\"}", "{\"summary\": \"This work investigates federated reinforcement learning with asynchronous synchronizations to improve the time complexity. 
They introduce the asynchronous federated policy gradient (AFedPG), which tackles lagged policies using a delay-adaptive lookahead. In addition, they present a sample complexity analysis of the algorithm, demonstrating a linear speedup compared to the single-agent scenario.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The work provides asynchronous synchronization updates tailored for federated RL.\\n2. The work presents a tight sample complexity analysis of the proposed algorithm, demonstrating a linear speedup that aligns with the single-agent state-of-the-art.\", \"weaknesses\": \"1. The application of asynchronous updates from federated learning to federated policy gradients appears to be incremental, especially since much of the supervised federated learning literature has examined how to manage lagged models, while existing federated reinforcement learning research focuses on addressing the dynamic nature of reinforcement learning in federated settings.\\n2. It appears that a momentum method was introduced for federated policy gradients in heterogeneous environments to handle online sample collections dependent on $\\\\theta$ in [1]. While the paper emphasizes its novelty by discussing the momentum design (delay-adaptive lookahead), which differs from asynchronous supervised federated learning, it remains uncertain whether this concept is genuinely unique in comparison to prior literature in federated reinforcement learning, which also addresses the issue of online sample collections that vary with policy updates.\\n\\n[1] Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments, Wang et al., ICML 2024\", \"questions\": \".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely apologize for the confusion and for making the claim before showing the results. 
We did not put the partial results in the previous version. We have uploaded a new version, which includes the complete results in Appendix A.3.\\n\\nAppendix A.3 contains detailed descriptions; we briefly summarize them here. For Swimmer-v4, Hopper-v4, and Walker2D-v4 tasks, PG achieves similar rewards to AFedPG in Figure 9. For Humanoid-v4, in some runs, PG achieves the optimal rewards (shadowed area), while in other runs, PG does not achieve rewards comparable to our AFedPG, because of the hyperparameter tuning of PG, e.g., learning rates. \\n\\nThe hyperparameter tuning of PG does not affect our main claim. Recall that we aim to use these experiments to verify the speedup effect in AFedPG in Table 1. The suboptimal hyperparameters of PG in the Humanoid-v4 task do not influence the conclusion: The more agents in AFedPG, the faster the optimal reward will be achieved.\\n\\n---\\n\\nCompared to the original version, we have added \\n- Communication overhead analysis in Appendix A.3. \\n- Reward performances with long runs in Appendix A.3.\\n- More details in the proof in Appendix B, e.g., the derivation of equation 34 with details, and the explanation of equation 36. \\n- Time complexity analysis in Appendix A.1. \\n\\nOn the other hand, we are the first work to analyze asynchronous policy gradient methods with general function parameterization, and we achieve the SOTA sample complexity for global convergence.\\n\\nWe hope this helps address the main concerns. We would appreciate a re-evaluation based on the improvements made under your suggestions.\"}", "{\"summary\": \"This paper proposes an asynchronous federated reinforcement learning framework termed AFedPG for the policy gradient algorithm. It designs a delay-adaptive lookahead technique that can effectively handle heterogeneous arrival times of policy gradients. 
This work shows theoretical linear speedup in terms of the norm for policy gradient and verifies the speedup effect numerically.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed framework handles the delayed arrival of policy-gradient and reduces the waiting time compared to the algorithm for the homogeneous setting.\\n\\n2. The authors propose their special step size designs to cancel out a second-order error term when conducting the error analysis, which serves as a technical novelty.\\n\\n3. Numerical experiments demonstrate that the authors accelerate the training process compared to the synchronous algorithm.\", \"weaknesses\": \"1. Issues in Section 4. The authors are encouraged to explain more about the concepts of active agents, concurrency, and delay. In algorithm 2, the authors are encouraged to explain more details about model sharing from the central server as how the agents hold $d_{k-1}$ and $\\\\theta_{k-1}$ is not explicitly explained. In addition, the authors are encouraged to explain the relationship between their algorithms and the single-agent and homogeneous counterparts in the literature. Last, the authors assume that the agents can sample a trajectory with infinite lengths, which is impossible in practice. The authors are recommended to explain more on such assumptions.\\n\\n2. Issues in Section 5. (a) In equations 10 and 11, RHS contains a constant term that does not depend on $K$, which originates from the function approximation error as indicated in the appendix. The authors are encouraged to explain this term in the main paper. (b) The authors are encouraged to explain how they get the total waiting time in line 394.\\n\\n3. Issues in Appendix B (proofs). (a) The authors are encouraged to explain more about the definitions and notations that are already established in the literature, for example, $F_\\\\rho(\\\\theta),\\\\mu_F,\\\\sigma_g$. 
(b) In Lemmas B.6 and B.7, the authors are recommended to point out the cited lemma in the references. (c) The second term in line 1084 should be $(\\\\mathbb{E}\\\\cdot^2)^{1/2}$. (d) In equations 37 and 38, there are typos related to $\\\\nabla$. (e) In line 1028, there is a typo related to $d_{\\\\delta_{k-1}}$.\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe have uploaded a new version with more results and improvements based on your valuable suggestions.\\n\\nHere are the results that we have added, as suggested by you:\\n- Re-state the algorithms with clear explanations.\\n- Time complexity analysis and explanations in Appendix A.1.\\n- More explanations with details in the proof in Appendix B.\", \"some_main_results_suggested_by_the_others\": [\"Reward performances with long runs in Appendix A.3.\", \"Communication overhead analysis and comparisons between asynchronous and synchronous settings in Appendix A.3.\", \"We hope this would answer your questions and address your concerns. We would truly appreciate it if you could re-evaluate the work accordingly.\"]}", "{\"comment\": \"We truly appreciate the re-evaluation.\\n\\nIn [3], it has a general parameterization for FOSP only. For the global convergence (Theorem 5), the critic is Linear (not deep RL), and the policy is softmax (not general) for discrete and finite action and state spaces only, which should not be restricted in deep RL. In deep RL (AC methods), the results are analyzed in [4] and [5] in the single-agent setting.\\n\\nWe hope this would answer the question about the settings in previous RL works.\"}" ] }
5DT0t5NylU
Robin3D: Improving 3D Large Language Model via Robust Instruction Tuning
[ "Weitai Kang", "Haifeng Huang", "Yuzhang Shang", "Mubarak Shah", "Yan Yan" ]
Recent advancements in 3D Large Language Models (3DLLMs) have highlighted their potential in building general-purpose agents in the 3D real world, yet challenges remain due to the lack of high-quality robust instruction-following data, leading to limited discriminative power and generalization of 3DLLMs. In this paper, we introduce Robin3D, a powerful 3DLLM trained on large-scale instruction-following data generated by our novel data engine, Robust Instruction Generation (RIG) engine. RIG generates two key instruction data: 1) the Adversarial Instruction-following data, which features mixed negative and positive samples to enhance the model's discriminative understanding. 2) the Diverse Instruction-following data, which contains various instruction styles to enhance model's generalization. As a result, we construct 1 million instruction-following data, consisting of 344K Adversarial samples, 508K Diverse samples, and 165K benchmark training set samples. To better handle these complex instructions, Robin3D first incorporates Relation-Augmented Projector to enhance spatial understanding, and then strengthens the object referring and grounding ability through ID-Feature Bonding. Robin3D consistently outperforms previous methods across five widely-used 3D multimodal learning benchmarks, without the need for task-specific fine-tuning. Notably, we achieve a 7.8\% improvement in the grounding task (Multi3DRefer) and a 6.9\% improvement in the captioning task (Scan2Cap).
[ "3D Large Language Model", "3D Multimodal Learning" ]
https://openreview.net/pdf?id=5DT0t5NylU
https://openreview.net/forum?id=5DT0t5NylU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYJT7bbGXB", "rDKL6UmeY1", "oZG7Sk5OhE", "lyGpw8ZaUk", "izhT7dKTW5", "DN6QMvAEYo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730540574538, 1730454105916, 1731558354134, 1731043688382, 1730271142613, 1730428876168 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3448/Reviewer_NEzg" ], [ "ICLR.cc/2025/Conference/Submission3448/Reviewer_D2SR" ], [ "ICLR.cc/2025/Conference/Submission3448/Authors" ], [ "ICLR.cc/2025/Conference/Submission3448/Reviewer_Jaxq" ], [ "ICLR.cc/2025/Conference/Submission3448/Reviewer_RG9D" ], [ "ICLR.cc/2025/Conference/Submission3448/Reviewer_jF4G" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces Robin3D, a powerful 3D Large Language Model (3DLLM) trained using large-scale instruction-following data generated by an innovative Robust Instruction Generation (RIG) engine to address the lack of robustness and diversity in current 3DLLMs' training data.\\n\\nBesides, Robin3D incorporates two important modules: Relation-Augmented Projector (RAP) and ID-Feature Bonding (IFB). RAP enhances the model's understanding of spatial relationships between objects, while IFB strengthens the connection between object IDs and features, improving the model's referring and grounding capabilities, enabling it to better handle complex instructions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Reasonable motivation\\n - Expands existing ScanNet 3D text annotations through the data engine.\\n2. Strong experimental results\\n - Demonstrates excellent performance.\\n3. Clear and complete paper writing.\", \"weaknesses\": \"I have some questions about this paper that need further discussion. Please see them below.\\n\\nIf the authors can address my concerns, I am willing to raise my score.\", \"questions\": \"1. The motivation in the paper is somewhat mixed. 
Although it emphasizes pre-training with adversarial samples, it also highlights improvements through the Relation-Augmented Projector (RAP) and ID-Feature Bonding (IFB), which may seem like an attempt to pad contributions.\\n\\n2. The ablation study shows that RAP and IFB contribute less to 3D QA (with low improvements in Table 3's ScanQA and SQA3D) but significantly help 3D grounding. Can the authors explain why?\\n\\n3. The paper lacks details on the prompts used for Adversarial Data Generation and the data creation process. Is the input for adversarial samples only the ground truth prompt?\\n\\n4. The ablation for Adversarial Data is insufficient, making it unclear whether the performance improvement is due to the increase in data volume or specifically from the adversarial samples.\\n\\n5. The authors should compare methods like VLM-Grounder[1] and Coarse Correspondences[2] using video as a modality.\\n\\n6. Should the authors consider extending their approach to non-ScanNet scenes?\\n\\n7. The pre-training work should provide training configurations and training time.\\n\\n8. Can the proposed RIG be extended to the point level to enhance point-level LLM performance, such as with PointLLM [3] or GPT-4Point [4]? Additionally, could it be generalized to outdoor 3D LLMs like DriveLM [5] or LiDAR-LLM [6]? 
It would be beneficial for the authors to discuss this in the paper.\\n\\n[1] A VLM Agent for Zero-Shot 3D Visual Grounding\\n\\n[2] Coarse Correspondence Elicit 3D Spacetime Understanding in Multimodal Language Model\\n\\n[3] PointLLM: Empowering Large Language Models to Understand Point Clouds\\n\\n[4] GPT4Point: A Unified Framework for Point-Language Understanding and Generation\\n\\n[5] DriveLM: Driving with Graph Visual Question Answering\\n\\n[6] LiDAR-LLM: Exploring the Potential of Large Language Models for 3D LiDAR Understanding\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper targets the interesting problem of instruction tuning on 3D LLMs. As there lacks sufficient dataset, the paper introduces a new 1M instruction-tuning datset, which contains 344K adversarial samples, 508K diverse samples as well as 165K benchmark training set samples. Based on the dataset, the proposed algorithm, called Robin3D, obtains promising results in the ground task as well as caption task.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper targets the challening problem of 3D LLM for ground task as well as caption task.\\n2. To address the problem, the paper presents a robust instruction generation engine and 1M instruction-following data has been presented.\\n3. The paper obtains promising experimental results on five 3D multimoal learning benchmarks.\", \"weaknesses\": \"1. Will the 1M 3D instruction dataset be release to the public? The main contribution of the paper lies on the datasets, thus whether the dataset will be released to public is important to evaluate the contribution of the paper.\\n2. The dataset seems to be designed specifially for the 3D indoor environment. How about the generation ability of the dataset and the model used for the outdoor environment, like the 3D street?\\n3. 
Is it possible to provide an ablation study on different numbers of training examples? It would be better to know the model performance with different amounts of training data.\\n4. The model is based on the Vicuna-7B-v1.5 backbone. How about the performance if other LLM models are utilized? Besides, if a larger LLM is utilized, can a larger training dataset further boost the performance?\", \"questions\": \"Please address the questions raised in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Robin3D, a 3D large language model trained to follow instructions in 3D environments using the Robust Instruction Generation (RIG) engine, which creates a one-million-sample dataset. RIG generates Adversarial and Diverse instruction data to improve Robin3D\\u2019s discriminative power and generalization. 
Robin3D employs a Relation-Augmented Projector for spatial understanding and ID-Feature Bonding for object grounding, achieving notable improvements over previous models, including a 7.8% gain in grounding and 6.9% in captioning without task-specific fine-tuning.\\nWhile Robin3D performs impressively across multiple ScanNet benchmarks, as noted in the weaknesses, some concerns remain regarding its network architecture and experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022 The paper introduces a large-scale 3D scene-instruction dataset that includes diverse instruction types, integrating varied instruction styles, existing benchmark instructions, and challenging adversarial instructions, enhancing the model\\u2019s robustness and generalization.\\\\\\n\\u2022 It proposes novel architectures that effectively leverage both 2D and 3D object-centric features, enabling richer spatial understanding and stronger object-grounding capabilities in complex 3D environments.\", \"weaknesses\": \"\\u2022 Relies on off-the-shelf 3D instance segmentation models trained on ScanNet with closed-set categories. I recommend the authors consider Segment3D [1] for open-vocab, class-agnostic segmentation.\\\\\\n\\u2022 Cropping instance-level point clouds and applying object-level 3D point cloud CLIP (Uni3D) can limit the receptive fields and be computationally heavy. I recommend the authors try scene-level CLIP (OpenScene [2], RegionPLC [3]) and then cropping the output features.\\\\\\n\\u2022 Table 1 reports only traditional NLP metrics (e.g., BLEU, CIDEr, METEOR, Rouge). I recommend the authors include LLM-based evaluation (e.g., GPT or Mistral) for better alignment with human assessment.\\\\\\n\\u2022 The experiments are limited to the ScanNet dataset. 
I recommend that the authors expand to other datasets (e.g., SceneVerse [4]) for broader evaluation.\\n\\n[1] Huang et al., \\\"Segment3D: Learning Fine-Grained Class-Agnostic 3D Segmentation without Manual Labels\\\", ECCV, 2024.\\\\\\n[2] Peng et al., \\\"OpenScene: 3D Scene Understanding with Open Vocabularies\\\", CVPR, 2023.\\\\\\n[3] Yang et al., \\\"RegionPLC: Regional Point-Language Contrastive Learning for Open-World 3D Scene Understanding\\\", CVPR, 2024.\\\\\\n[4] Jia et al., \\\"SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding\\\", ECCV, 2024.\", \"questions\": [\"Table 1 does not clarify what kinds of LLM each method uses but it is worth doing that since multimodal LLM performance usually depends on LLM performance. \\\\\", \"What is the baseline in Table 3? I couldn't find out the network architecture of the baseline.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper first constructs 1 million instruction-following data, including adversarial samples and diversified samples to bridge the drawbacks of existing 3D MLLM instruction following fine-tuning datasets. To better handle the proposed complex instructions, this paper first incorporates Mask3D and Relation-Augmented Projector to enhance spatial understanding, and then improve the object referring and grounding ability through ID-Feature Bonding. The trained model Robin3D shows superior performance across five widely used 3D multimodal learning benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper constructs a large instruction-following fine-tuning dataset containing adversarial and diverse samples.\\n2. 
The zero-shot performance improvement of the trained Robin3D appears evident across various benchmarks and the ablation experiments clearly demonstrate the gains of different designs in the paper.\\n3. The writing of the article is fluent and easy to understand.\", \"weaknesses\": \"1. The related work section lacks clarity on the novelty and advantages of the RAP and IFB modules in comparison to existing studies.\\n(1) Explain how object IDs are linked to object features in previous research and discuss the benefits of wrapping these features with identical ID tokens before and after them.\\n(2) Describe how earlier studies extract and utilize 3D and 2D features, and highlight the advantages of introducing Mask3D information using RAP.\\n\\n2. How will the relative proportions of diverse and adversarial samples generated with RIG affect the performance of Robin3D? \\nPlease conduct ablation studies to examine and analyze how datasets with varying proportions of adversarial and diverse samples influence Robin3D's performance across different tasks.\\n\\n\\n3. If the dataset constructed in this paper is used to fine-tune existing task-specific or joint trained models, will it provide consistent performance gains? \\nThe authors could consider selecting 1 or 2 task-specific and jointly trained models, respectively, and tuning them on the proposed instruction-following tuning dataset to further demonstrate the contribution of this dataset to the community.\", \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduce a 3DLLM Robin3D trained on their proposed dataset generated by RIG, a pipeline to acquire diverse and discriminative data. Robin3D incorporates two key modules, the Relation-Augmented Projector (RAP) and ID-Feature Bonding (IFB), to enhance spatial understanding and object grounding. 
The model demonstrates state-of-the-art performance across five 3D multimodal learning benchmarks without task-specific fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The RIG engine's capability to generate adversarial and diverse instruction data significantly enhances the robustness and generalizability of 3DLLMs. The innovative proposal of adversarial data may help mitigate the hallucination tendencies of large models. The collection of diverse instructions, expanded by GPT to enrich the diversity of expressions, may alleviate the issue of rigid model outputs.\\n2. The integration of RAP and IFB modules improves the model's spatial understanding and object grounding capabilities.\\n3. Robin3D achieves superior performance across multiple benchmarks, showcasing its effectiveness and versatility.\\n4. The models are trained on the whole task data, rather than on individual tasks.\", \"weaknesses\": \"1. The module's innovativeness is found to be lacking: RAP utilizes linear layers to separately connect the 3D features from the scene, individual object 3D features, and positional information features, followed by concatenation. A possible baseline (chat-scene) employs the exact same encoders, using linear layers to connect 3D features and positional features, and then concatenating individual object 3D features. The only modification made is the interchange of inputs to the linear layers. Similarly, IFB introduces an ending ID token to signal the end of an object presentation, followed by a rearrangement of vision tokens and prompt tokens. This method of simply altering the prompt tokens is not particularly innovative.\\n\\n2. Are the results state-of-the-art (SOTA): The experimental results compared against the Chat-Scene model have been open-sourced and are being continuously updated. 
Prior to the submission deadline for this conference, the accuracy on the ScanRefer dataset had already surpassed 61% and 55%, outperforming the method proposed in this paper. This paper should have utilized the most recent projects and results as benchmarks; otherwise, the effectiveness of the proposed method cannot be ascertained.\\n\\n3. Model generality: Mainstream approaches to 3DLLMs typically employ a single, well-trained model to evaluate performance across multiple tasks. The joint training without task-specific fine-tuning method described in the paper does not represent a contribution unique to this work.\", \"questions\": \"1. This paper employs the same detectors and 2D&3D encoders as chat-scene (Mask3D, Uni3D, DINO-v2). What are the significant innovations of this model compared to chat-scene?\\n\\n2. In Table 1, what do the grey sections referring to \\\"ground truth question-relative objects annotations\\\" specifically indicate? Is the explicit positional information P introduced by the dedicated detector Mask3D on the ScanQA test set considered as \\\"ground truth question-relative objects annotations\\\"?\\n\\n3. Results of Baseline (+ RAP & IFB) in Table 3 are the same as the benchmark results in Table 2. In the \\\"ABLATION STUDY\\\" section, it seems there might be confusion regarding the order of incorporating modules and datasets. Benchmark (+ Adversarial & Diverse) should include RAP & IFB and encompass all datasets. Why are the results of the ablation study (+ Adversarial & Diverse) inconsistent with the results in Table 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5CHcmVzbAz
SePPO: Semi-Policy Preference Optimization for Diffusion Alignment
[ "Daoan Zhang", "Guangchen Lan", "Dong-Jun Han", "Wenlin Yao", "Xiaoman Pan", "Hongming Zhang", "Mingxiao Li", "pengcheng chen", "Dong Yu", "Christopher Brinton", "Jiebo Luo" ]
Reinforcement learning from human feedback (RLHF) methods are emerging as a way to fine-tune diffusion models (DMs) for visual generation. However, commonly used on-policy strategies are limited by the generalization capability of the reward model, while off-policy approaches require large amounts of difficult-to-obtain paired human-annotated data, particularly in visual generation tasks. To address the limitations of both on- and off-policy RLHF, we propose a preference optimization method that aligns DMs with preferences without relying on reward models or paired human-annotated data. Specifically, we introduce a Semi-Policy Preference Optimization (SePPO) method. SePPO leverages previous checkpoints as reference models while using them to generate on-policy reference samples, which replace “losing images” in preference pairs. This approach allows us to optimize using only off-policy “winning images”. Furthermore, we design a strategy for reference model selection that expands the exploration in the policy space. Notably, we do not simply treat reference samples as negative examples for learning. Instead, we design an anchor-based criterion to assess whether the reference samples are likely to be winning or losing images, allowing the model to selectively learn from the generated reference samples. This approach mitigates performance degradation caused by the uncertainty in reference sample quality. We validate SePPO across both text-to-image and text-to-video benchmarks. SePPO surpasses all previous approaches on the text-to-image benchmarks and also demonstrates outstanding performance on the text-to-video benchmarks.
[ "Reinforcement Learning", "Diffusion Model", "Image Generation", "Video Generation" ]
https://openreview.net/pdf?id=5CHcmVzbAz
https://openreview.net/forum?id=5CHcmVzbAz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kRDcXMO5Ws", "Y7QLAK7Ub1", "PY8Gc73zib", "M94Wv70YB7", "Aee3fpLEwn", "7zwPv31aKN" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_comment" ], "note_created": [ 1730695171842, 1730519667824, 1730679600262, 1730571158173, 1731658053101, 1731657639932 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8420/Reviewer_oHX4" ], [ "ICLR.cc/2025/Conference/Submission8420/Reviewer_QoUj" ], [ "ICLR.cc/2025/Conference/Submission8420/Reviewer_Yyer" ], [ "ICLR.cc/2025/Conference/Submission8420/Reviewer_3U9X" ], [ "ICLR.cc/2025/Conference/Submission8420/Authors" ], [ "ICLR.cc/2025/Conference/Submission8420/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes SePPO, a novel preference optimization method for aligning diffusion models with human preferences without requiring reward models or paired human-annotated data. The key innovations are: 1) Using previous checkpoints as reference models to generate on-policy reference samples, 2) Introducing a strategy for reference model selection that enhances policy space exploration, and 3) Developing an Anchor-based Adaptive Flipper (AAF) to assess reference sample quality. The method shows strong performance on both text-to-image and text-to-video generation tasks, outperforming previous approaches across multiple metrics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a combination of semi-policy optimization with the AAF mechanism without requiring reward models or paired human-annotated data.\\n2. Comprehensive Empirical Validation: The work provides comprehensive experimental validation across both text-to-image and text-to-video domains.\", \"weaknesses\": \"1. I find the definition of \\\"better\\\" (L277 in bold) to be confusing, and the same term, which also appears in Theorem 4.1, seems to lack rigor.
I think what the authors mean is \\\"closer\\\" to the preferred sample $x_0^w$, but closer to $x_0^w$ does not necessarily mean better since it depends on the metric considered. Given a reward function where $r(x_0^w)>r(x_0^l)$, whether a new sample $x_1$ or $x_2$ has a higher reward depends on the reward landscape, not on how close it is to $x_0^w$.\\n2. Given 1, I think the main spirit of the proposed method is to fit the preferred distribution, similar to SPIN. In that sense, I am confused about why the proposed method is expected to do better since the advantage of the proposed method compared to SPIN is not clearly discussed in the paper. For example, what is missing in SPIN that the proposed method can do? \\n3. Lack of human evaluation.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a diffusion model to improve sample quality and diversity in generative tasks. The authors introduce an algorithm that integrates an Anchor-based Adaptive Flipper. To substantiate the claims, a series of comprehensive experiments, including quantitative evaluations against several state-of-the-art models and detailed ablation studies, are presented.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper employs thorough quantitative metrics such as PickScore and HPSv2. The use of ablation studies is commendable, clearly delineating the contributions of individual components of the proposed model.\\n2.\\tThe results are well-presented, with clear visualizations and comprehensive tables that facilitate an understanding of performance metrics across different models.\\n3.\\tThe proposed method is simple and easy to follow.\", \"weaknesses\": \"1.\\tThe proposed method is a bit tricky, which may limit its contribution.
Therefore, I suggest the authors conduct a deeper theoretical analysis of the proposed method.\\n2.\\tThe insights from Theorem 4.1 are quite intuitive and easy to understand. I suggest the authors put Theorem 4.1 in the Appendix.\\n3.\\tThe essential reason why randomly sampling previous checkpoints as a reference model with AAF achieves the best performance is still unclear. I suggest the authors theoretically analyze the effectiveness of the proposed method, which could significantly strengthen this work. At least, the authors need to analyze in which cases, using the latest model is better than randomly sampling previous checkpoints as the reference model.\", \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes SePPO, leveraging two techniques to improve the previous SPIN-Diffusion method, including 1) randomly selecting the reference policy from all previous checkpoints and 2) A heuristic (anchor-based criterion) to determine whether a reference sample will likely win or lose. The paper performs experiments on both T2I and T2V tasks to demonstrate the effectiveness of their methods by comparing them with several different methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper performs experiments on both T2I and T2V generation tasks.\", \"The paper is easy to follow and understand.\"], \"weaknesses\": \"- **Technical contributions**: The proposed techniques involve 1) randomly selecting the reference policy from the checkpoint at iteration 0 (DITTO) and the latest iteration t - 1 (SPIN-Diffusion). 2) A heuristic (anchor-based criterion) to determine whether a reference sample will likely win or lose, and the learning loss is adjusted accordingly. 
Thus, the technical contributions of this work are limited.\\n\\n- **Incorrect Definition of on-policy examples**: The definitions of on-policy and off-policy learning are well-defined in reinforcement learning literature [1]. Specifically, on-policy learning refers to the setting in which the training examples are sampled from the current policy ($\\pi_\\theta$) being learned. However, this work treats the reference samples $\\mathbf{x}^{ref}\\_0$ sampled from $\\pi\\_{ref}$ as \\\"on-policy\\\" examples (e.g., Line 250 - 252), **which is incorrect**. In fact, both $\\mathbf{x}^{ref}\\_0$ and $\\mathbf{x}^w\\_0$ are off-policy samples since none of them is sampled from the current policy $\\pi_\\theta$.\\n\\n- **Limited Performance Improvement**: According to Tables 1 and 2, the performance improvement of the SePPO$^w$ over SPIN-Diffusion is trivial in terms of PickScore and HPSv2 score. The improvement is only obvious when evaluating with ImageReward. Additionally, SPIN-Diffusion even outperforms the proposed SePPO$^w$ by an obvious margin in terms of Aesthetic score. Therefore, I would recommend conducting a human evaluation to corroborate the results as in [2].\\n\\n- **Evaluation protocol for video generation tasks**: The metrics used in Table 3 are not meaningful enough. As the model is trained on ChronoMagic-Bench-150, I recommend reporting the results by following the evaluation protocol in Tables 3 & 4 of the ChronoMagic-Bench paper [4].\\n\\n- **Missing Citations**: Please cite the related works [2] and [3], which tackle T2I and T2V model alignment by learning from reward models.\\n\\n**Minor points**\\n1. Line 025: \\\"winning or losing images\\\" --> \\\"winning or losing examples\\\". Since the proposed method is not limited to image generation, please revise similar errors throughout the paper. \\n\\n2. I recommend not using too many subsubsections.
Furthermore, avoid using unnecessary new lines when formalizing the optimization problems (e.g., Equation 5). If you are worried about the page limit, please include more qualitative examples in the main text.\\n\\n3. I suggest selecting a different abbreviation for your method. PPO [5] is widely recognized as a policy optimization algorithm. Since your SePPO is in the self-play finetuning family and is unrelated to PPO, using this acronym may lead to confusion.\\n\\n[1] Sutton, Richard S. \\\"Reinforcement learning: An introduction.\\\" A Bradford Book (2018).\\n\\n[2] Li et al., \\\"Reward Guided Latent Consistency Distillation\\\", TMLR 2024\\n\\n[3] Li et al., \\\"T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback\\\", NeurIPS 2024\\n\\n[4] Yuan et al., \\\"ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation\\\", NeurIPS 2024\\n\\n[5] Schulman et al., \\\"Proximal Policy Optimization Algorithms\\\".\", \"questions\": \"Please refer to the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Semi-Policy Preference Optimization for fine-tuning diffusion models in visual generation, bypassing reward models and human annotations. SePPO uses past model checkpoints to generate on-policy reference samples, replacing \\u201closing\\u201d images, and focuses on optimizing only \\\"winning\\\" samples. An anchor-based criterion selectively learns from these reference samples, mitigating performance drops from uncertain quality. SePPO outperforms existing methods in text-to-image and shows strong results in text-to-video benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.
**Practical Approach**: The SePPO method offers a practical solution for preference alignment without the need for human annotation or a reward model, which reduces significant labor costs.\\n2. **Clear Writing and Presentation**: The submission is well-written and formatted, making it easy to follow and understand.\\n3. **Effective Sample Filtering**: The Anchor-based Adaptive Flipper (AAF) criterion is a useful addition, as it helps to filter uncertain samples and enhances model robustness.\", \"weaknesses\": \"1. **Limited Literature Review and Comparison**: While preliminary experiments are presented for text-to-video models, the paper lacks a thorough literature review on this topic and has limited comparisons in the text-to-video experiments, making the evaluation seem somewhat incomplete. Improving the Related Work section would strengthen the context and position of SePPO.\\n2. **Table Readability**: Moving the annotation explanations to the table captions could improve table readability.\\n3. **Unclear Justification for Theorem 4.1**: The key to instantiating SePPO is Theorem 4.1; however, its rationale remains questionable to me. Specifically, if the reference model assigns a higher probability to the noise than the current model does, we can only say that it is a better model for generating this image; we cannot assert the quality of the generated image itself.\", \"questions\": \"Generally speaking, methods like Diffusion-DPO require human-annotated data pairs while SePPO does not. The upper bound of aligning model outputs using annotated data pairs should be higher than that of SePPO, which relies solely on the model itself.
Can the authors present an explanation for this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Thanks for the valuable suggestions!\", \"comment\": \"We would like to appreciate the reviewers' efforts and the valuable suggestions. We will improve the paper and **address all issues** accordingly in the next submission.\"}" ] }
5BjQOUXq7i
RegMix: Data Mixture as Regression for Language Model Pre-training
[ "Qian Liu", "Xiaosen Zheng", "Niklas Muennighoff", "Guangtao Zeng", "Longxu Dou", "Tianyu Pang", "Jing Jiang", "Min Lin" ]
The data mixture for large language model pre-training significantly impacts performance, yet how to determine an effective mixture remains unclear. We propose RegMix to automatically identify a high-performing data mixture by formulating it as a regression task. RegMix trains many small models on diverse data mixtures, uses regression to predict performance of unseen mixtures, and applies the best predicted mixture to train a large-scale model with orders of magnitude more compute. To empirically validate RegMix, we train 512 models with 1M parameters for 1B tokens to fit the regression model and predict the best data mixture. Using this mixture we train a 1B parameter model for 25B tokens (i.e. 1000× larger and 25× longer) which we find performs best among 64 candidate 1B parameter models with other mixtures. Furthermore, RegMix consistently outperforms human selection in experiments involving models up to 7B models trained on 100B tokens, while matching or exceeding DoReMi using just 10% of the computational resources. Our experiments also show that (1) Data mixtures significantly impact performance; (2) Web corpora rather than data perceived as high-quality like Wikipedia have the strongest positive correlation with downstream performance; (3) Domains interact in complex ways often contradicting common sense, thus automatic approaches like RegMix are needed; (4) Data mixture effects transcend scaling laws. Our code is available at https://github.com/sail-sg/regmix.
[ "language model pre-training", "data mixture", "regression" ]
Accept (Spotlight)
https://openreview.net/pdf?id=5BjQOUXq7i
https://openreview.net/forum?id=5BjQOUXq7i
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zpE2b0tFPF", "y94NfYIE0u", "w8OdZCHlxL", "sgb3kqsJ8F", "kxGIfnCmPz", "kbhDl74TUN", "jbludrE3hB", "hVQiqpmqHH", "fR2cmHVyeW", "eKKoX3yREO", "d89HTahMFo", "cKGBeqmHCG", "awegECUF4G", "XOVYMqyyjA", "TaT0O2QSQN", "R5UtV5k5lA", "NyXbDNMYVe", "LMrJwAFoUG", "L6v1UN4zTd", "HZqm7E8ggY", "Gvm4ApKQsK", "BG8ucg2Sgx", "ApwWfTlS1s", "78Y6cv3Ksy", "0Tp44MacnB" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730761461464, 1732191538217, 1732516406335, 1733181855589, 1732191916765, 1732190269710, 1730687538471, 1732191093249, 1732679959332, 1737523782635, 1733040262992, 1732605926737, 1732539747094, 1729045484889, 1732191796239, 1732608827426, 1733153358932, 1733181807407, 1732190500844, 1730568503661, 1734319286222, 1732191265542, 1732191608010, 1732519095648, 1730569680280 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_YiAH" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_FGaq" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_MoX3" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_MoX3" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_wxJ7" ], [ 
"ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_uBLs" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_YiAH" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_wxJ7" ], [ "ICLR.cc/2025/Conference/Submission6642/Area_Chair_GMhU" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Authors" ], [ "ICLR.cc/2025/Conference/Submission6642/Reviewer_FGaq" ] ], "structured_content_str": [ "{\"summary\": [\"The work introduces REGMIX, a method for optimizing data mixtures to enhance language model training efficiency. REGMIX treats data mixture selection as a regression task, using small proxy models to predict the performance of different mixtures and identify the best one, enabling larger models to be trained with significantly less compute. 
Key findings include:\", \"REGMIX\\u2019s mixtures perform as well as or better than those selected by human experts and prior methods like DoReMi, with only a fraction of the compute cost.\", \"Data mixture has a substantial impact on downstream performance, with single-task performance differences reaching up to 14.6%.\", \"General web corpora (such as CommonCrawl) outperform traditionally high-quality data like Wikipedia in driving downstream performance.\", \"Domain interactions are complex and often counterintuitive, highlighting the value of automated approaches like REGMIX.\", \"Data mixture effects go beyond scaling laws, with REGMIX capturing the complexity by jointly optimizing across all domains.\"], \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The paper introduces a novel, regression-based approach for selecting data mixtures that reduces computational costs in language model training. The approach offers an efficient alternative to traditional dynamic or heuristic data allocation methods, making a valuable contribution to the field.\", \"The paper is technically robust and well-structured, with extensive validation across diverse data scenarios. It empirically supports the rank invariance hypothesis and uses clear, well-structured figures to illustrate the method and results, enhancing reader understanding. REGMIX\\u2019s ability to match or outperform other DoReMi and other methods with significantly less compute is a compelling outcome for LLM pre-training efficiency.\", \"The paper tackles a pertinent problem in LLM pre-training. Given the increasing size of training data and models, this approach could have a significant impact on the field, especially in reducing computational costs and environmental impact.\"], \"weaknesses\": [\"To maximize impact, the authors could highlight specific scenarios where the approach enables previously infeasible experiments due to resource constraints. 
Also, adding a broader discussion on trade-offs of the method (e.g., scenarios where the rank invariance assumption might not hold) would help readers assess its practical relevance and future applicability.\", \"The work could have used standardized computation metrics, such as FLOPs or GPU hours, to allow clearer comparison of the method's efficiency gains relative to baselines.\"], \"questions\": \"1) Can you further explain the choice of multiplying the token distribution by a value from 0.1 to 5.0? Is this a standard practice? Were other ranges tested, and if so, what were the results?\\n - Also, could you discuss the rationale behind this range and whether you conducted any sensitivity analyses to determine its impact on the results?\\n\\n2) Given sufficient computation available, would segmenting domains further (into finer-grained topic-based segments) likely improve model performance or lead to more effective mixture predictions? Do you think that finer segmentation would affect the rank invariance assumption? I suggest the authors discuss the potential impacts and challenges of finer-grained domain segmentation in their future work section. This would help address the broader implications and limitations of their approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"> W1: The key assumption of rank invariance of data mixtures across different model sizes and token counts is not thoroughly validated. This assumption might not hold in all cases, especially with significant changes in model scale and data distribution.\\n\\n> Q1: Can the authors provide more theoretical or empirical evidence to support the rank invariance assumption?
How does this assumption hold up with significant changes in model scale and data distribution?\\n\\nThank you for your insightful questions regarding the rank invariance hypothesis. We acknowledge your concerns about the assumption of rank invariance across different model sizes and token counts. To address this, we conducted extensive experiments during the rebuttal period, incorporating models with 280M parameters as well as models with 1M, 60M, and 1B parameters. Each of these models was trained using the same 64 different data mixture configurations, allowing us to systematically analyze the ranking of validation losses.\\n\\nOur findings, summarized in the table below, illustrate the strong rank correlation ($\\\\rho$) among different model and token scales:\\n\\n| Setting | 1M (1B token) | 60M (1B token) | 280M (5B token) |\\n| :---: | :---: | :---: | :---: |\\n| 1B (25B token) | 96.97% | 94.22% | 98.43% |\\n\\nThese results indicate that even with significant changes in model scale (from 1M to 1B) and token counts (from 1B tokens to 25B tokens), the rankings of data mixtures remain remarkably stable, thereby supporting the rank invariance hypothesis. We have also updated $\\\\textrm{\\\\color{blue}Appendix J}$ to provide more details and the visualization.\\n\\nWe appreciate your feedback and hope this additional evidence helps clarify the validity of our hypothesis.\\n\\n---\\n\\n> W2: The paper claims stability across different proxy model sizes, but the experiments are limited to models with up to 1B parameters. It remains unclear if the method would be equally effective for much larger models commonly used in practice (e.g., 7B or 70B parameters). If so, the additional computation cost could not be ignored.\\n\\n> W3: The authors only trained 25B tokens using the obtained data mixtures. This raises the question of whether the data scale could be enlarged to 50 times or even 100 times. 
And can LLM still benefit from the obtained data mixture?\\n\\nWe appreciate your concerns about the scalability of our method. To address your concerns about our current experimental setup, we propose the following additional pre-training experiments during the rebuttal period:\\n\\n* **Training a 7B model on 100B tokens** with our RegMix data mixture\\n* **Training a 7B model on 100B tokens** with the Human baseline data mixture\\n\\nThese proposed experiments will help validate the effectiveness of our data mixture approach at larger model scales and token counts. We welcome any suggestions to improve the experimental setup and provide more comprehensive evidence of our method's generalizability.\\n\\nAdditionally, we want to clarify that our stability analysis focuses on 1M and 60M proxy models, both significantly smaller than 1B parameters. The overall computational cost of all 1M model training is **approximately 2% of 1B model** training over 25B tokens, which we consider negligible.\\n \\n---\\n\\n> W4: Although the method is more efficient than some previous approaches, training 512 small models still requires substantial computational resources. This could be a limitation for teams with limited access to such resources. The trade-off between performance gains and additional costs may not always hold when the model scales up.\\n\\nWe appreciate your concern about computational resources. Despite training 512 proxy models, **the total computational cost remains remarkably low due to our ultra-small model design**. Specifically, training 512x 1M models (with just 2 layers) consumes only approximately 2% of the computational resources required for training a 1B model. 
This means that researchers capable of training a 1B model on 25B tokens can comfortably accommodate our proposed methodology without significant additional computational burden.\"}", "{\"title\": \"After reading the rebuttal\", \"comment\": \"Thank you for your efforts during the rebuttal phase. I appreciate how you addressed my concerns point by point. I have read your response as well as the opinions of the other reviewers. As I mentioned in my summary of strengths, I recognize the contributions made by the authors, however, I still have some reservations about how this method applies to the nature of training large language models. Based on your clarifications, I would like to raise my score accordingly.\"}", "{\"title\": \"Reply to Reviewer MoX3\", \"comment\": \"Thank you for your constructive review and encouraging words. We really appreciate it! In our final revision, we will polish the paper further to incorporate the valuable insights gained from the rebuttal discussions. Thank you!\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> W1: there are some details of the regression model that's not clearly explained. It is not clear to me how the authors fit the regression model, how many data points are used to fit the models. It would be nice to have a pseudocode or open-sourced script\\n\\nThank you for your suggestion. We have addressed this concern by including detailed pseudocode in Algorithm 1 to clarify the steps for fitting the regression model and the generation of data points. You can see the highlighted revision in $\\\\textrm{\\\\color{blue}Appendix I}$. Specifically, $N$ data points are created by training proxy models on diverse data mixtures sampled from a Dirichlet distribution. Each data point corresponds to a pair of mixture weights and the observed metric (e.g., validation loss). These data points are then used to train the regression model, which predicts the metric for unseen data mixtures. 
Additionally, we are committed to releasing the code, data, and models to ensure reproducibility.\\n\\n---\\n\\n> W2: the mixture weights for the baseline DoReMi are directly taken from the paper. However, it's not clear if the optimal weights learned using the DoReMi method would be different due to model and data processing differences. It's probably better to re-learn the weights using the small model in the current experiment setup.\\n\\nWe appreciate your suggestion regarding re-learning mixture weights and agree that the proposed approach is more appropriate. While we made extensive efforts to reproduce the baseline methods across 17 domains using matching tokenizers and comparable proxy model sizes based on the released code, we encountered challenges in achieving consistently favorable results. The adapted data mixture from their paper, however, demonstrated strong experimental performance. To avoid potentially misleading comparisons, we decided to focus our current submission on presenting the adapted mixture results of baseline methods. Thank you for your suggestion!\\n\\n---\\n\\n> Q1: in table 2, why do you only list the MSE for 1M model but rank correlation for all model sizes? what's the MSE for the other two sizes? Why is rank correlation a better metric here?\\n\\nThanks for your detailed feedback! The MSE results of the 60M and 1B models are not included because MSE does not provide meaningful insights across different model sizes due to scale differences. Rank correlation ($\\rho$), on the other hand, measures the relative order of performance across data mixtures and is invariant to scale, making it a better metric for cross-scale comparisons.
Moreover, $\\\\rho$ is more aligned with our assumption, the rank invariance hypothesis.\\n\\nTo clarify, we have added a more detailed explanation in the revised submission, and please check out the updated caption in $\\\\textrm{\\\\color{blue}Table 2}$.\\n\\n---\\n\\n> Q2: on line 243, the author mentioned the rank invariance hypothesis. However, it's not super clear to me what exactly this means and how this hypothesis is verified by the experimental results. Could you provide a clear definition of the rank invariance hypothesis and explicitly state how the experimental results support or verify this hypothesis?\\n\\nThanks for your question! The rank invariance hypothesis posits that the ranking of data mixtures should remain stable across various model scales (e.g., from 1M parameters to 1B parameters) and token scales (e.g., from 1B tokens to 25B tokens).\\n\\nTo clarify this concept further, we conduct additional experiments with models of 280M parameters, alongside models of 1M, 60M, and 1B parameters. Each of these four model scales was trained using the same 64 different data mixture configurations, varying in token amounts. We then compared the validation loss rankings across different data mixtures and calculated the rank correlation coefficient ($\\\\rho$) for each setting. The correlation of each setting with the actual ranks of the 1B parameter models trained on 25B tokens is summarized below:\\n\\n| Setting | 1M (1B token) | 60M (1B token) | 280M (5B token) |\\n| :---: | :---: | :---: | :---: |\\n| 1B (25B token) | 96.97% | 94.22% | 98.43% |\\n\\nThe above table demonstrates the strong correlation between the different models and token scales, supporting the rank invariance hypothesis. 
We have also added it to $\\\\textrm{\\\\color{blue}Appendix J}$ for full details of these experiments.\\n\\n---\\n\\n> Q3: there are some related work on data selection that falls into the group-level data selection category: https://aclanthology.org/2020.acl-main.754/\\n\\nThank you for pointing out the related work! We have cited this paper and placed it among the group-level data selection works in the revised submission. We find the method in the proposed paper very promising and see the potential for exploring multilingual balance optimization through a combination of RegMix and the approach. Thanks for pointing out!\"}", "{\"title\": \"Rebuttal to All Reviewers\", \"comment\": [\"We thank all reviewers for their constructive feedback, and we have learned a lot from the feedback and have responded to each reviewer individually. We are very excited to receive this positive feedback from all of you. Here we want to highlight some new experimental results that may interest all reviewers.\", \"**New Experiments on 100 Domains**: We have extended RegMix to 100 domains, achieving an impressive correlation of 98.80% between the predicted ranks and true ranks of data mixtures. This demonstrates the scalability of RegMix when applied to a larger number of domains.\", \"**New Experiments with 280M Models**: We have added new experiments with 280M models and systematically demonstrate the rank invariance hypothesis across four model scales (1M, 60M, 280M, 1B) and three token scales (1B tokens, 5B tokens, 25B tokens).\", \"**Ongoing Experiment - 7B Model with Optimized Data Mixture**: We are currently applying the optimized data mixture from RegMix to a 7B model trained on 100B tokens. Once the results are available, we will compare its performance against the human baseline.\", \"**Ongoing Experiment - 1B Level Proxy Models**: We are also investigating using 1B models as proxy models to generate optimized data mixtures for training 7B models. 
This experiment aims to provide additional insights into the selection of appropriate proxy model sizes. We will report back once the results are available.\", \"**Paper Revision**: We have made the following updates:\", \"$\\\\textrm{\\\\color{blue}Appendix A}$: describe the broader impact of our paper.\", \"$\\\\textrm{\\\\color{blue}Appendix B}$: include a detailed discussion on the limitations around the rank invariance hypothesis.\", \"$\\\\textrm{\\\\color{blue}Appendix H}$: extend RegMix to 100 domains.\", \"$\\\\textrm{\\\\color{blue}Appendix I}$: add the pseudocode for RegMix.\", \"$\\\\textrm{\\\\color{blue}Appendix J}$: provide additional empirical evidence supporting the rank invariance hypothesis.\"]}", "{\"summary\": \"The paper presents RegMix, a new method for optimizing data mixtures in pre-training large language models (LLMs). Recognizing the importance of data composition, the authors frame mixture selection as a regression task. RegMix uses small proxy models trained on various data mixtures to build a predictive model that identifies the optimal mixture for larger models.\\n\\nThe authors conduct extensive experiments to validate the approach, showing that models trained with RegMix-selected data mixtures outperform those trained with mixtures chosen by other methods, including human selection and DoReMi, while utilizing only 10% of the compute budget.\\n\\nThe authors provide insights into the effects of data mixtures, offering empirical evidence that data mixtures can significantly impact performance, with single-task performance variations reaching up to 14.6%. They also emphasize the complex interactions that occur between different data domains.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. 
RegMix introduces a fresh approach by framing data mixture selection as a regression problem rather than relying on complex optimizations or heuristics, making the process scalable and computationally efficient.\\n\\n2. The paper\\u2019s experimental setup is robust, with 512 small proxy models across diverse data mixtures, creating a solid regression model for data selection.\\n\\n3. The paper is well-organized, clearly explaining the methodology and experiments. It introduces the hypothesis of rank invariance in data mixtures, supported by visual aids, making the regression model\\u2019s role easy to understand.\", \"weaknesses\": \"The paper conducts a set of small-proxy models trained with small-scale tokens.\\n\\nThe paper only experiments with 1M models with 1B tokens. \\nIt is unclear how to decide the size of the proxy model parameter and training token.\", \"questions\": \"How do we decide the size for proxy model and training tokens?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"> Q1: Can you further explain why the choice of multiplying the token distribution by a value from 0.1 to 5.0? Is this a standard practice? Were other ranges tested, and if so, what were the results? Also, could you discuss the rationale behind this range and whether you conducted any sensitivity analyses to determine its impact on the results?\\n\\nWe appreciate your careful reading and thoughtful feedback! The range values of 0.1 to 5.0 are determined from empirical observation rather than standard practice, as prior works do not need random data mixtures to fit regression models. These two values are chosen primarily to ensure that the resulting random data mixtures are diverse and meaningful. 
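To make this concrete, one hypothetical way such random mixtures could be drawn (our own illustration, not the released code) is to multiply the base token distribution by a random factor in [0.1, 5.0] and use the result as the concentration parameter of a Dirichlet distribution:

```python
import random

def sample_mixture(base_dist, low=0.1, high=5.0, rng=random):
    """Draw one random data mixture over domains (weights sum to 1).

    A factor near `low` yields skewed mixtures; a factor near `high`
    yields mixtures close to the base token distribution.
    """
    factor = rng.uniform(low, high)
    alpha = [p * factor for p in base_dist]            # Dirichlet concentration
    draws = [rng.gammavariate(a, 1.0) for a in alpha]  # Gamma trick for Dirichlet
    total = sum(draws)
    return [d / total for d in draws]

# hypothetical base token distribution over 4 domains
base = [0.5, 0.3, 0.15, 0.05]
mixture = sample_mixture(base)
assert abs(sum(mixture) - 1.0) < 1e-9  # a valid mixture always sums to 1
```

Larger concentration values keep samples near the base distribution, which is why the upper end of the range produces mixtures aligned with the original token proportions.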
Although we have not documented the use of other values officially, we conducted experiments with varying ranges previously, and these also showed similarly promising results.\\n\\nTo provide more context on the reason for selecting this particular range, a value leaning towards the lower end of the scale (0.1) creates more skewed data mixtures. Conversely, a value approaching the upper limit (5.0) constructs mixtures more aligned with the original token distributions. This choice accomplishes two principal objectives: **ensuring expansive coverage of the data mixture space** to improve the regression model's generalizability to unseen data mixtures while preserving **effective densities of meaningful information** on each training mixture. Although we have not carried out a formal sensitivity analysis, our empirical findings hint at the regression models' robustness towards variations within this range. Inspired by your suggestion, we plan to conduct a comprehensive study summarizing the impact of these hyper-parameters in our final version.\\n\\n---\\n\\n> Q2: Given sufficient computation available, would segmenting domains further (into finer-grained topic-based segments) likely improve model performance or lead to more effective mixture predictions? Do you think that finer segmentation would affect the rank invariance assumption? I suggest the authors to discuss the potential impacts and challenges of a finer-grained domain segmentation in their future work section. This would help address the broader implications and limitations of their approach.\\n\\nThank you for your insightful question! It aligns well with our original intention when designing RegMix \\u2013 ideally, we aim for it to scale up to more than 100 domains. However, our ambitions are somewhat hampered by the limited availability of a pre-training dataset with a substantial quantity of domains. 
As a result, our main experiments, as outlined in the main text, are conducted using only 17 domains provided by The Pile dataset.\\n\\nNonetheless, inspired by your prompt, we expanded our scope to examine RegMix's scalability more comprehensively. During the rebuttal period, we conducted a preliminary study covering 100 domains. The selection of these domains is based on their respective base URL domains and the availability of tokens. This effort was specifically intended to highlight the adaptability of RegMix across an extended range of domains, up to as many as 100.\\n\\nIn more detail, we analyzed the token availability across different URL domains using the most recent pre-training dataset, FineWeb [2]. Each URL domain is considered a separate domain, and we constructed data mixtures over these domains. We provide a sample of these domains below:\\n\\n```\\narticles.latimes.com\\nblogs.wsj.com\\nen.wikipedia.org\\neverything2.com\\nideas.repec.org\\nlatimesblogs.latimes.com\\nnews.bbc.co.uk\\n...\\n```\\nTo evaluate the effectiveness of RegMix, we trained 1,000 proxy models of 1M parameters each, fit regression models on their results, and examined the rank correlation between the rankings predicted by the regression model and the actual rankings of 64 unseen data mixtures on these domains. Taking the validation loss on the `en.wikipedia.org` domain as a demonstration, and following our main experiments validating on both 1M and 60M models, RegMix delivers the following regression performance across these 100 domains:\\n\\n| Regression | 1M ($\\\\rho$ $\\\\uparrow$)|1M (MSE $\\\\downarrow$)|60M ($\\\\rho$ $\\\\uparrow$)|\\n|----|----|----|----|\\n|Linear| 90.33| 0.12| 88.64|\\n|LightGBM| 99.53| 0.02| 98.80|\\n\\nAs shown above, consistent with our findings on The Pile dataset (17 domains), RegMix demonstrates commendable performance even when scaled up to 100 domains. 
With LightGBM regression, it continues to achieve a high correlation on unseen data mixtures for both 1M models (correlation of 99.53%) and 60M models (correlation of 98.80%). These results indicate the applicability of RegMix over a wide array of domains.\\n\\nWe have also updated the table above along with a visualization in $\\\\textrm{\\\\color{blue}Appendix H}$ in the revised submission.\\n\\n[1] Muennighoff et al., \\\"Scaling Data-Constrained Language Models\\\", NeurIPS 2023.\\n\\n[2] Penedo et al., \\\"The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale\\\", 2024.\"}", "{\"title\": \"Comment from Authors\", \"comment\": \"Dear Reviewers,\\n\\nSince the rebuttal period has been extended to December 2, we are excited to have another week to welcome any additional questions or clarifications you may have regarding our submission. Based on the previous round of reviews, we have made significant improvements to RegMix:\\n\\n- Expanding the experiments to 1,000 domains for RegMix and verifying its effectiveness\\n- Verifying performance improvement on 7B-level models over 100B tokens\\n- Systematically verifying the rank invariance hypothesis on model scales and more token counts\\n- Investigating proxy model sizes at the 1B level with new insights\\n\\nWe eagerly welcome the opportunity to further discuss the potential of RegMix. If you have any specific inquiries or require additional information, please do not hesitate to share them. We are sincerely grateful to all reviewers for their constructive, helpful, and timely feedback.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Thank you for your detailed responses on the questions.\\nI am satisfied with the detailed explanations addressing my concerns. 
I would like to support the acceptance of this work.\"}", "{\"comment\": \"Thank you for your detailed responses and extensive experiments.\\n\\nIt is great to see the empirical justifications for the method's capability to handle as many as 1000 domains, which is much closer to real-world settings to my knowledge. \\n\\nThe authors may misunderstand my concerns about the rank invariance assumption (W3) but I see similar concerns from other reviewers and the authors supplement experiments on more model scales. This can somehow support the authors' claim while I am still confused given the contradicting opinions from different literature. I tend to believe that rank invariance holds under some preconditions, which may hold true in most cases. Or the resolution of loss is too small to faithfully reflect the rankings, which requires us to make predictions on downstream tasks.\\n\\nOverall, I appreciate the authors' excellent work and would like to support its acceptance.\"}", "{\"title\": \"More Experimental Results by Authors\", \"comment\": \"Hi all reviewers, thank you once again for your valuable comments and suggestions, which have been very helpful for us! We understand that this is a particularly busy time, and we would greatly appreciate it if you could take a moment to review our additional experiments and responses and let us know if they adequately address your concerns. Should there be any additional feedback, we will do our best to incorporate it in the next few days.\\n\\nAs previously committed, we have completed the experiments and are now reporting our findings:\\n- $\\\\textrm{\\\\color{blue}Appendix K}$: Comparing RegMix and Human on a 7B model over 100B tokens, RegMix continues to outperform Human by a significant margin, with the ability to accelerate performance by up to 75% on some benchmarks.\\n- $\\\\textrm{\\\\color{blue}Appendix L}$: Investigating whether it could bring more gains by training large proxy models (e.g., 1B). 
The conclusion is that our 1M proxy model is already strong enough, so we recommend that researchers begin with ultra-small models for data mixture optimization.\\n\\n## Appendix K: Scaling RegMix to 7B Models over 100B Tokens\\n\\nIn order to further validate the effectiveness of our method on larger models, we conducted an experiment using the Human baseline and RegMix data mixtures on a 7B model trained on 100B tokens. The results, summarized in ${\\\\textrm{\\\\color{blue}Table 10}}$ (see below), demonstrate that RegMix still outperforms the Human baseline, achieving an average performance boost of 2%:\\n\\n| Benchmark | Human | Our Method |\\n| --- | --- | --- |\\n| Social IQA | 41.6 | 43.3 |\\n| HellaSwag | 55.0 | 63.3 |\\n| PiQA | 72.4 | 75.3 |\\n| OpenBookQA | 34.1 | 36.7 |\\n| Lambada | 44.9 | 51.0 |\\n| SciQ | 91.7 | 91.2 |\\n| ARC Easy | 63.5 | 65.4 |\\n| COPA | 75.0 | 80.1 |\\n| RACE | 35.1 | 36.6 |\\n| LogiQA | 25.7 | 24.1 |\\n| QQP | 58.9 | 56.1 |\\n| WinoGrande | 58.6 | 60.7 |\\n| MultiRC | 52.0 | 51.0 |\\n| **Average** | **54.5** | **56.5 (+2.0)** |\\n\\nTo provide a more detailed illustration, we benchmarked the downstream performance of RegMix and Human on every dataset at intervals of 25B tokens in $\\\\textrm{\\\\color{blue}Figure 16}$ of $\\\\textrm{\\\\color{blue}Appendix K}$. The results show that RegMix can significantly speed up pre-training, with a **50% acceleration on most benchmarks (e.g., HellaSwag)** and **up to 75% on some benchmarks (e.g., PiQA)**. Notably, the performance boost does not decrease with the amount of training tokens. However, we also observed that both RegMix and Human struggle to improve on certain benchmarks (e.g., MultiRC), even with increased token amounts. These findings suggest that RegMix mainly benefits downstream tasks whose performance increases with the amount of training data, but may not improve tasks that do not follow scaling laws. 
This observation is intriguing and worthy of further investigation.\\n\\n---\\n\\n## Appendix L: Using Larger Proxy Models\\n\\nWe conducted a preliminary study on the impact of proxy model size on effectiveness. Specifically, we compared two configurations: (1) 128 proxy models of 1B parameters each, and (2) 512 proxy models of 1M parameters each (the setting used in our main experiments). Both configurations used 1B training tokens per proxy model. We limited our investigation to these configurations due to computational constraints that prevented us from exploring scenarios with more 1B-parameter proxy models.\\n\\nTo evaluate these configurations, we used their respective optimized data mixtures to train two 7B models on 100B tokens and compared their performance. The results, summarized in $\\\\textrm{\\\\color{blue}Table 11}$ (see below), show that both proxy settings achieved similar average performance across downstream tasks. This suggests that increasing proxy model size, even with fewer proxy models, can maintain competitive performance. However, given that the 1B proxy models do not significantly outperform the 1M proxy models, and considering that they incur much more computational overhead, we recommend prioritizing a larger number of smaller proxy models over fewer larger ones. 
Based on our findings, we suggest practitioners begin with ultra-small proxy models (e.g., 1M parameters in our setting) as a starting point to optimize data mixtures for language model pre-training.\\n\\n| **Benchmark** | **RegMix** (1B as proxy) | **RegMix** (1M as proxy) |\\n|--------------|------------------------------|------------------------------|\\n| Social IQA | 43.4 | 43.3 |\\n| HellaSwag | 62.9 | 63.3 |\\n| PiQA | 75.1 | 75.3 |\\n| OpenBookQA | 36.2 | 36.7 |\\n| Lambada | 50.0 | 51.0 |\\n| SciQ | 91.2 | 91.2 |\\n| ARC Easy | 65.9 | 65.4 |\\n| COPA | 79.6 | 80.1 |\\n| RACE | 35.6 | 36.6 |\\n| LogiQA | 23.9 | 24.1 |\\n| QQP | 56.6 | 56.1 |\\n| WinoGrande | 60.7 | 60.7 |\\n| MultiRC | 51.6 | 51.0 |\\n| **Average** | **56.4** | **56.5** |\"}", "{\"summary\": \"This paper proposes a method that trains a simple regression model over data mixture ratios and the LLM loss on very small models, and then uses the trained regressor to predict the best data mixture configuration for training larger-scale models. The method is interesting and the experiments are relatively comprehensive. The paper considers two regression methods and verified the predicted mixture on a 1B model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. the proposed method is novel and has practical impact on LLM training\\n2. the authors evaluated the predicted mixture on 1B model and compared to a few prior methods based on the downstream performance of the model\\n3. the paper also has good analysis and insights regarding optimizing data mixture ratios, the relationship between validation PPL loss and downstream task performance, and the issues regarding scaling laws for data mixture.\", \"weaknesses\": \"1. there are some details of the regression model that's not clearly explained. It is not clear to me how the authors fit the regression model, how many data points are used to fit the models. 
It would be nice to have a pseudocode or open-sourced script\\n2. the mixture weights for the baseline DoReMi is directly taken from the paper. However, it's not clear if the optimal weights learned using the DoReMi method would be different due to model and data processing differences. It's probably better to re-learn the weights using the small model in the current experiment setup.\", \"questions\": \"1. in table 2, why do you only list the MSE for 1M model but rank correlation for all model sizes? what's the MSE for the other two sizes? Why is rank correlation a better metric here?\\n2. on line 243, the author mentioned the rank invariance hypothesis. However, it's not super clear to me what exactly this means and how this hypothesis is verified by the experimental results. Could you provide a clear definition of the rank invariance hypothesis and explicitly state how the experimental results support or verify this hypothesis?\\n3. there are some related work on data selection that falls into the group-level data selection category: https://aclanthology.org/2020.acl-main.754/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"> W1: The feasibility of treating the data mixing problem as a regression problem has been an idea unveiled by previous studies [1,2], as the authors also mentioned in the paper.\\n\\nThank you for your comment. We greatly appreciate and respect prior work such as data mixing laws (DML), and we would like to respectfully describe the differences. DML seeks analytical scaling functions to describe data mixing effects, but our RegMix approach directly optimizes target metrics through regression models. 
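To give a rough, self-contained illustration of what such a regression step can look like (a hypothetical sketch with synthetic numbers, not the released implementation), one can fit a linear model from mixture weights to validation loss and then rank candidate mixtures by predicted loss:

```python
import random

def fit_linear(X, y, lr=0.2, steps=20000):
    """Ordinary least squares via full-batch gradient descent:
    predicts a proxy model's validation loss from its mixture weights."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(steps):
        grad_w, grad_b = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = b + sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * g / n for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def predict(w, b, x):
    return b + sum(wj * xj for wj, xj in zip(w, x))

# synthetic stand-in for "train a tiny proxy model and record its loss"
def proxy_loss(x):
    return 2.0 - 0.5 * x[0] + 0.3 * x[1]  # domain 0 helps, domain 1 hurts

rng = random.Random(0)
X = []
for _ in range(8):  # 8 random mixtures over 3 domains
    raw = [rng.random() for _ in range(3)]
    s = sum(raw)
    X.append([r / s for r in raw])
y = [proxy_loss(x) for x in X]

w, b = fit_linear(X, y)
# rank candidate mixtures by predicted loss and pick the best one
candidates = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
best = min(candidates, key=lambda c: predict(w, b, c))
print(best)  # [1.0, 0.0, 0.0]: all weight on the helpful domain
```

In practice the paper uses stronger regressors as well (e.g., LightGBM) and simulates a large pool of candidate mixtures, but the fit-then-rank structure is the same.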
We believe regression models offer a more flexible and nuanced approach to understanding data mixture performance, allowing us to explore non-linear relationships and interactions that may be difficult to capture with DML.\\n\\n---\\n\\n> W2: The methods experiment on as many as 512 training runs. It is unclear whether the regression step is still necessary with so many experimented mixtures. This makes the proposed method actually a grid search with a small-scale proxy.\\n\\nThank you for this important question. While 512 training runs may seem substantial, it's actually far more efficient than grid search, and here's why:\\n\\nOur work spans 17 domains from the Pile, which significantly increases the complexity of data mixture optimization. Even with a simplified approach of allocating 10 units of 0.1 across these 17 domains (maintaining a sum of 1), the computational demands remain considerable.\\n\\nThis scenario represents a \\\"stars and bars\\\" problem in combinatorics - specifically, a combination problem with replacement. The total number of possible combinations can be calculated as C(n + k - 1, k), where n = 17 (domains) and k = 10 (units), yielding 5,311,735 combinations.\\n\\nWhile existing methods, like those in the Data Mixing Law paper, can use token availability to constrain the search space, the resulting space remains vast. Our approach, requiring only 512 runs, achieves feasible regression accuracy while reducing computational requirements by several orders of magnitude compared to traditional grid search, which would demand millions of training runs.\\n\\nAdditionally, in response to the suggestion of Reviewer `YiAH`, we have extended RegMix to encompass 100 domains (please refer to our detailed response to Reviewer `YiAH` for more information). In this setting, we utilize 1,000 proxy models (1M parameters each) to achieve a correlation of 98.8% on the unseen data mixtures of 60M parameter models. 
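The combinatorial count above can be verified directly:

```python
from math import comb

n_domains = 17  # domains in The Pile
units = 10      # units of 0.1 that must sum to 1
# "stars and bars": the number of ways to distribute `units` identical
# units among `n_domains` domains is C(n + k - 1, k)
n_mixtures = comb(n_domains + units - 1, units)
print(n_mixtures)  # 5311735
```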
We hope this addresses your concerns regarding the effectiveness of regression models through this challenging case.\\n\\n---\\n\\n> W3: The proposed method highly depends on the assumption of ranking invariance regarding scales. The author only provides limited empirical results on this assumption. \\nHowever, such an assumption is questionable according to [3]. \\nIt would be better if the authors provided more discussion to explain the scope where this assumption holds.\\n\\nWe completely agree with your point. As noted in $\\\\textrm{\\\\color{blue}Appendix B}$, many existing data mixing methods assume the availability of unlimited data for each domain and thus struggle in realistic scenarios. In contrast, RegMix can effectively manage token availability by adjusting the simulation space, particularly leveraging the 4-epoch practice introduced by Muennighoff et al. 2023 [1]. For instance, if we can afford to repeat HackerNews for 4 epochs and its token count constitutes 3% of the total expected training tokens, we can set its maximum domain weight to 12% during simulation. This approach allows RegMix to efficiently handle data mixtures according to the available computational budget, in line with the findings of Goyal et al. 2024 [2]. Furthermore, exploring the integration of our method with the decay coefficient of data reuse proposed by Muennighoff et al. [1] could be an intriguing avenue for future research.\\n\\n[1] Muennighoff et al. \\\"Scaling Data-Constrained Language Models\\\", NeurIPS 2023.\\n\\n[2] Goyal et al. \\\"Scaling Laws for Data Filtering--Data Curation cannot be Compute Agnostic.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024.\"}", "{\"title\": \"Thanks for your support\", \"comment\": \"Thank you for sharing your thoughtful feedback on our work. We greatly appreciate your input! We are particularly encouraged by your positive assessment of the scalability of RegMix with 1000 domains. 
As you note, this better reflects real-world deployment scenarios.\\n\\nWhile our experiments across model scales provide empirical support for the rank invariance hypothesis, we acknowledge that it remains under active discussion in the literature and that no consensus has been reached yet. Your perspective that it may hold under certain preconditions is insightful. The suggestion about potential resolution limitations of losses is also well-taken.\\n\\nWe will continue investigating the theoretical foundations and practical implications of RegMix and its hypothesis in future work. Thank you again for the constructive review that helped strengthen our paper.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to authors\", \"comment\": \"I acknowledge the authors' response, and I thank them for the efforts in addressing the issues pointed out. I also acknowledge the authors' responses to other reviewers, which addressed other relevant issues. I believe the modifications made help make the paper even stronger. I keep my score and believe the paper should be accepted at the main conference.\"}", "{\"title\": \"Reply to Reviewer YiAH\", \"comment\": \"Thank you for your thoughtful and constructive review! We deeply appreciate the valuable insights and feedback provided during the rebuttal discussions. In our final revision, we will carefully refine the paper, ensuring that we fully incorporate the suggestions and address the key points raised. Thank you again!\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"> W1: To maximize impact, the authors could highlight specific scenarios where the approach enables previously infeasible experiments due to resource constraints. 
Also, adding a broader discussion on trade-offs of the method (e.g., scenarios where the rank invariance assumption might not hold) would help readers assess its practical relevance and future applicability.\\n\\n\\nWe appreciate your thoughtful feedback on highlighting the practical implications of RegMix. We think a unique contribution of RegMix is its novel use of **ultra-small** proxy models (i.e., 1M parameters) to optimize data mixtures for language model pre-training, an approach previously unexplored in the field. This novel use of ultra-small proxy models reduces computational overhead to less than 2% of final training costs, dramatically lowering barriers for data mixture research within academic budgets. Through the focus on computational efficiency and our commitment to open science (with all datasets and trained models publicly available), we believe RegMix represents a significant advancement toward democratizing research in language model pre-training.\\n\\nBeyond the methodology contribution, our work delivers several novel empirical insights into data mixture research. We provide the first comprehensive demonstration of significant performance variations across different data mixtures, supported by extensive experiments with 1B-parameter models trained on 64 distinct mixtures and rigorously evaluated across 12 benchmarks. Our results establish the superiority of automatic data mixture optimization over human intuition-based approaches, with PhilPapers serving as an interesting in-depth case study illustrating how domain interactions follow complex patterns that transcend human intuition. 
We have included these explanations in $\\\\textrm{\\\\color{blue}Appendix A}$ in the revised submission to maximize the impact of RegMix, and we greatly appreciate your insightful suggestion!\\n\\nRegarding limitations and future work (detailed in $\\\\textrm{\\\\color{blue}Appendix B}$), we acknowledge that our investigation of the rank invariance assumption currently focuses on model scales from 1M to 1B parameters. While we aimed to verify the hypothesis at larger scales, establishing statistically meaningful correlations for 3B models would require training 64 different models with 50B tokens each, equivalent to training one 3B model on 3.2T tokens, which significantly exceeds our computational resources. \\n\\nFollowing the valuable suggestion of Reviewer `Fgaq`, we are actively conducting experiments with 7B models trained on 100B tokens to compare RegMix against the Human baseline during this rebuttal period. These experiments will provide crucial insights into the scalability of our conclusions to larger models trained on substantially more tokens, and we will share these experimental results as soon as they become available.\\n\\nAnother notable consideration is the token availability challenge. Like most existing data mixing methods, we currently assume unlimited domain data availability. While token availability and data mixture can be controlled in our simulations, we recognize that systematically incorporating real-world data availability constraints presents important theoretical and practical challenges. 
We see promising directions for future work in exploring data reuse decay coefficients, as introduced in [1], to address limited data scenarios, potentially extending RegMix to broader settings.\\n\\n---\\n\\n> W2: The work could have used standardized computation metrics, such as FLOPs or GPU hours, to allow a clearer comparison of the method efficiency gains relative to baselines.\\n\\nWe appreciate your constructive feedback regarding computational metrics. To address your suggestion, we would like to highlight that we have reported the estimated FLOPs in $\\\\textrm{\\\\color{blue}Table 4}$, which provides insights into the computational overhead of different methods. To make it clear, we copy the numbers of DoReMi and RegMix from $\\\\textrm{\\\\color{blue}Table 4}$ here:\\n\\n| Method | Average Performance | FLOPs |\\n| :---: | :---: | :---: |\\n| DoReMi | 46.8 | 3.7e19 |\\n| RegMix | 47.3 | 3.5e18 (\\u219390%) |\\n\\nAs demonstrated above, RegMix achieves a significant 90% reduction in FLOPs compared to the DoReMi baseline, while simultaneously improving average performance. In the revised manuscript, we will further emphasize these computational efficiency gains to provide a more comprehensive evaluation of all approaches.\"}", "{\"summary\": \"This paper formulates data mixing problems as a regression task. The authors propose to search data mixtures on small models and fit regression models to predict the optimal data mixture, which is then transferred to larger-scale model training. The authors empirically show that the rankings of different data mixtures remain consistent between small and large-scale training, and that data mixtures found to be optimal at small scales can lead to improved performance compared to human heuristics and previous methods at large scales.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. 
The author identifies a helpful assumption, namely ranking invariance regarding training scales, which helps reduce the cost of tuning data mixtures in this paper.\\n3. The proposed method is simple to implement and efficient, thus appealing to try in practice.\\n4. The paper contains extensive experiments to show optimizing data mixtures improves model performance.\", \"weaknesses\": \"1. The feasibility of treating the data mixing problem as a regression problem has been an idea unveiled by previous studies [1,2], as the authors also mentioned in the paper.\\n2. The methods experiment on as many as 512 training runs. It is unclear whether the regression step is still necessary with so many experimented mixtures. This makes the proposed method actually a grid search with a small-scale proxy.\\n3. The proposed method highly depends on the assumption of ranking invariance regarding scales. The author only provides limited empirical results on this assumption. However, such an assumption is questionable according to [3]. It would be better if the authors provide more discussion to explain the scope where this assumption holds.\\n\\n[1] Data Mixing Made Efficient: A Bivariate Scaling Law for Language Model Pretraining\\n\\n[2] Data Mixing Laws: Optimizing Data Mixtures by Predicting Language Modeling Performance\\n\\n[3] Scaling Laws for Data Filtering\\u2014 Data Curation cannot be Compute Agnostic\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"This paper introduces REGMIX, a novel method for optimizing data mixtures to improve the efficiency of pre-training LLMs. The method frames data mixture selection as a regression task, using small proxy models trained on diverse data mixtures to predict the best-performing mixture for larger-scale training. 
REGMIX presents a significant advance in automating data mixture selection, reducing computational costs while achieving or surpassing the performance of prior methods and human expertise. Detailed key findings and contributions include:\", \"Methodology: REGMIX uses small models to evaluate various data mixtures, fits a regression model to predict their performance, and identifies the optimal mixture for larger-scale training.\", \"Efficiency and Effectiveness (shown empirically): REGMIX-selected mixtures perform as well as or better than mixtures chosen by human experts or prior methods like DoReMi while using only 10% of the compute budget. The method also demonstrates improved downstream task performance, with single-task performance differences of up to 14%. These findings were obtained with comprehensive experiments, including testing with a 1B parameter model, and comparing two regression methods for modeling data mixtures.\", \"Key Insights: Data composition has a substantial impact on LLM performance, often in counterintuitive ways, emphasizing the importance of automated approaches like REGMIX. The authors also found that general web corpora (e.g., CommonCrawl) outperform traditionally high-quality datasets like Wikipedia for downstream performance, and that REGMIX can capture complex interactions across data domains, revealing insights that go beyond existing scaling laws.\", \"Strength of this paper\", \"Novel and Effective Approach: The paper introduces a novel regression-based method, REGMIX, for optimizing data mixtures in LLM pre-training, reducing computational costs while maintaining or surpassing performance compared to methods like DoReMi and human selection. 
By framing data mixture selection as a regression task, the method avoids complex optimization or heuristic approaches, making it computationally efficient and scalable.\", \"Experimental Rigor: Extensive experiments cover diverse data scenarios and validate the method, including training 512 small proxy models on diverse data mixtures and testing with a 1B parameter model trained on 25B tokens. The authors empirically support the rank invariance hypothesis, demonstrating consistency in mixture rankings between small and large models.\", \"Scalability and Practicality: The method enables parallel training of small proxy models, significantly reducing the time and compute required compared to traditional approaches. It is simple to implement and efficient, making it highly practical for real-world applications.\", \"Impact and Relevance: The approach addresses a critical problem in LLM pre-training, offering a solution that reduces computational and environmental costs, which is increasingly important as models and datasets grow larger.\", \"The paper is well-written, clearly explaining the methodology, experiments, and findings.\", \"Weaknesses of this paper\", \"Several reviewers raised a few concerns/limitations of this paper. By addressing these limitations, the paper could strengthen its experiments and expand its impact.\"], \"weaknesses_of_the_paper\": [\"Experimental scale and settings: The experiments are limited to models with 1M parameters trained on 1B tokens and validated with 1B parameter models trained on 25B tokens. The scalability to much larger datasets or model sizes is untested. It is also unclear how to determine the optimal size of the proxy model parameters or the training tokens, which limits the reproducibility and generalization of the method. 
The paper does not use standardized computation metrics like FLOPs or GPU hours, making it harder to quantify efficiency gains relative to baselines.\", \"Impact, applicability, and reproducibility: The authors could better emphasize specific scenarios where the approach enables previously infeasible experiments due to resource constraints, or discuss trade-offs, such as when the rank invariance assumption may not hold, to help readers better understand the method's limitations and future applicability. Key details about the regression model, such as the number of data points used for fitting, are not clearly explained. Adding pseudocode or open-sourced scripts would improve clarity and reproducibility.\", \"Baseline Comparisons: The mixture weights for DoReMi were directly taken from the original paper, but differences in model or data processing could affect the results. Re-learning these weights in the current setup would provide a fairer comparison.\"], \"additional_comments_on_reviewer_discussion\": \"The above summarizes the strengths and weaknesses raised by reviewers. Most of the weaknesses were addressed via further discussion and more experimental results. Given the relatively positive ratings, the strengths summarized above, and the mitigated concerns about the weaknesses, I recommend accepting this paper.\"}", "{\"comment\": \"> W1: The paper conducts a set of small-proxy models trained with small-scale tokens.\\n\\nThank you for the observation! The decision to employ small-proxy models alongside small-scale tokens is primarily driven by a desire to maintain a balance between experimental feasibility and computational efficiency. 
Proxy models enable us to efficiently explore a diverse array of data mixtures, yielding predictions that exhibit strong generalizability to larger-scale models, as verified by our experiments.\\n\\nAlthough our approach encompasses a suite of small proxy models, the computation overhead is relatively small since our method leverages **ultra-small proxy models** (for instance, the 1M model). Therefore, the computational overhead attributed to the training of proxy models remains marginal, comprising just 2% of the total computation required for the final training of one 1B model on 25B tokens. We believe this is not just feasible, but also highly efficient, particularly when compared to prior works that necessitate significantly higher computational resources for proxy models.\\n\\n---\\n\\n> W2: The paper only experiments with 1M models with 1B tokens. It is unclear how to decide the size of the proxy model parameter and training token.\\n> Q3: How do we decide the size of the proxy model and training tokens?\\n\\nThanks for your insightful question! Our selection of a 1M model with 1B tokens as the proxy model is rooted in a strategic methodology for exploring data mixture optimization **with minimal computational resources**. The 1M model represents the minimal proxy model size (which has never been explored by previous works), with just 2 transformer layers, serving as an extremely efficient proxy that allows us to explore diverse data mixtures quickly. By successfully applying RegMix to this ultra-small model, we can gain confidence in the potential scalability of our approach to larger model sizes, providing valuable insights without requiring extensive computational investments.\\n\\nRecognizing the importance of comprehensive exploration, we also experimented with 60M proxy models alongside the 1M model, as demonstrated in $\\\\textrm{\\\\color{blue}Figure 13}$ of $\\\\textrm{\\\\color{blue}Appendix F}$. 
The consistent results of the derived optimized data mixture across these different model scales help validate the stability of our approach. Note that as illustrated in $\\\\textrm{\\\\color{blue}Figure 3}$, increasing the number of training runs can significantly improve the performance of regression models, but increasing the number of tokens does not necessarily do so. Therefore, we believe using 1M models with 1B tokens proves sufficient for proxy training.\\n\\n**Back to your question, the decision for the proxy model and training tokens should be a trade-off between computational efficiency and final performance**. Our experiments indicate that, even with a 1000x smaller model and 25x fewer tokens, we can still derive a high-performing data mixture superior to human selection and previous works. In practice, we recommend first scaling up the number of proxy models (i.e., the number of different data mixtures), instead of increasing the proxy model size and tokens. With the default setting of a 1M model with 1B tokens, we think it should be fine for large-scale experiments. We acknowledge that the optimal data mixture may vary slightly between small and large models, but our goal is to establish a practical method that can generate high-performing data mixtures within a feasible computation budget. We will add these explanations to the final version, thanks for your question!\", \"title\": \"Rebuttal by Authors\"}", "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"> Q2: How does REGMIX perform with proxy models larger than 1B parameters? Can the authors provide any preliminary results or insights on this? Or could the obtained data mixtures guide us to train a better model using many more tokens, e.g., 100B?\\n\\nThanks for your suggestion! We never thought of using 1B parameters as proxy model sizes, but we would love to conduct the experiments during the rebuttal to answer your question! 
We propose to explore 1B parameter proxy models by training 128x 1B models with 1B tokens (the most computation cost we can afford now) and comparing their optimized data mixture performance against our current best data mixture. Specifically, we will evaluate RegMix (1M) and RegMix (1B) in the context of training 7B models on 100B tokens during the rebuttal period, providing new insights into the scalability of our data mixture optimization approach across different model sizes. Please stay tuned for the results!\\n\\n> Q3: Can the authors provide a detailed comparison of the computational resources required by REGMIX and other methods? This would help in understanding the practical feasibility of the method.\\n\\nThanks for your suggestion! To make it clear, we have previously reported the estimated FLOPs in $\\\\textrm{\\\\color{blue}Table 4}$, which indicates the extra computation cost of different methods. We will highlight the extra computation cost part in the final version. To make it clear for you, we copy the numbers of DoReMi (the baseline) and RegMix below:\\n\\n| Method | Average Performance | FLOPs |\\n| :---: | :---: | :---: |\\n| DoReMi | 46.8 | 3.7e19 |\\n| RegMix | 47.3 | 3.5e18 (\\u219390%) |\"}", "{\"title\": \"Thank you for your feedback and we have 7B results now!\", \"comment\": \"Thank you very much for your response and for adjusting your score accordingly, and we are also excited to share our latest findings! As mentioned previously, we have ongoing experiments with 7B models, and we are pleased to report that **RegMix still outperforms the Human baseline significantly on 7B models trained on 100B tokens**!\\n\\nTo recap, in order to further validate the effectiveness of our method on larger models, we conducted an experiment using Human baseline and RegMix data mixtures on a 7B model trained on 100B tokens. 
The results, summarized in the following table ( ${\\\\textrm{\\\\color{blue}Table 10}}$ in the paper), demonstrate that RegMix still outperforms the Human baseline, achieving an average performance boost of 2%:\\n\\n| Benchmark | Human | Our Method |\\n| --- | --- | --- |\\n| Social IQA | 41.6 | 43.3 |\\n| HellaSwag | 55.0 | 63.3 |\\n| PiQA | 72.4 | 75.3 |\\n| OpenBookQA | 34.1 | 36.7 |\\n| Lambada | 44.9 | 51.0 |\\n| SciQ | 91.7 | 91.2 |\\n| ARC Easy | 63.5 | 65.4 |\\n| COPA | 75.0 | 80.1 |\\n| RACE | 35.1 | 36.6 |\\n| LogiQA | 25.7 | 24.1 |\\n| QQP | 58.9 | 56.1 |\\n| WinoGrande | 58.6 | 60.7 |\\n| MultiRC | 52.0 | 51.0 |\\n| **Average** | **54.5** | **56.5 (+2.0)** |\\n\\nTo provide a more detailed illustration, we benchmarked the downstream performance of RegMix and Human on every dataset at intervals of 25B tokens in $\\\\textrm{\\\\color{blue}Figure 16}$ of $\\\\textrm{\\\\color{blue}Appendix K}$. The results show that RegMix can significantly speed up pre-training, with a **50% acceleration on most benchmarks (e.g., HellaSwag)** and **up to 75% on some benchmarks (e.g., PiQA)**. Notably, the performance boost does not decrease with the amount of training tokens. However, we also observed that both RegMix and Human struggle to improve on certain benchmarks (e.g., MultiRC), even with increased token amounts. These findings suggest that **RegMix mainly benefits downstream tasks whose performance increases with the amount of training data, but may not improve tasks that do not follow scaling laws**. This observation is intriguing and worthy for further investigation.\\n\\nWe would like to thank you for your encouraging feedback, which motivates us to explore whether RegMix works well on larger model sizes. Your feedback also raises an interesting insight: RegMix mainly benefits tasks that are aligned with scaling laws (i.e., performance increases with the amount of tokens). We hope these results will increase your confidence in the effectiveness of RegMix. 
Thank you again for your time and effort during the review!\"}", "{\"summary\": \"The paper proposes a method called REGMIX for automatically selecting an effective data mixture to optimize the pre-training of large language models. REGMIX formulates the data mixture selection as a regression task, training a set of small models with diverse data mixtures and fitting a regression model to predict their performance. The fitted regression model is then used to simulate and identify the top-performing data mixture, which is subsequently used to train a large-scale model. The empirical results demonstrate that REGMIX can improve downstream task performance and achieves results comparable to or surpassing the DoReMi method while using only 10% of the compute budget.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a novel method, REGMIX, which formulates the data mixture selection problem as a regression task. This is a creative approach that leverages small-scale proxy models to predict optimal data mixtures for large-scale models.\", \"The authors conducted extensive experiments, training 512 models with 1M parameters on 1B tokens to fit the regression model. They then validated this model by training a 1B parameter model on 25B tokens, showing superior performance compared to human selection and the DoReMi method.\", \"The method allows for parallel training of small proxy models, making it more scalable than previous approaches that require training a single model for a long time.\", \"The paper provides several interesting findings, such as the significant impact of data mixtures on performance, the strong positive correlation of web corpora with downstream performance, and the complex interactions between domains.\"], \"weaknesses\": [\"The key assumption of rank invariance of data mixtures across different model sizes and token counts is not thoroughly validated. 
This assumption might not hold in all cases, especially with significant changes in model scale and data distribution.\", \"The paper claims stability across different proxy model sizes, but the experiments are limited to models with up to 1B parameters. It remains unclear if the method would be equally effective for much larger models commonly used in practice (e.g., 7B or 70B parameters). If so, the additional computation cost could not be ignored.\", \"The authors only trained 25B tokens using the obtained data mixtures. This raises the question of whether the data scale could be enlarged to 50 times or even 100 times. And can the LLM still benefit from the obtained data mixture?\", \"Although the method is more efficient than some previous approaches, training 512 small models still requires substantial computational resources. This could be a limitation for teams with limited access to such resources. The trade-off between performance gains and additional costs may not always hold when the model scales up.\"], \"questions\": [\"Can the authors provide more theoretical or empirical evidence to support the rank invariance assumption? How does this assumption hold up with significant changes in model scale and data distribution?\", \"How does REGMIX perform with proxy models larger than 1B parameters? Can the authors provide any preliminary results or insights on this? Or could the obtained data mixtures guide us to train a better model using many more tokens, e.g., 100B?\", \"Can the authors provide a detailed comparison of the computational resources required by REGMIX and other methods? This would help in understanding the practical feasibility of the method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
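The metareview above describes the REGMIX pipeline in three steps: train many small proxy models on sampled data mixtures, fit a regression model from mixture weights to performance, then simulate many candidate mixtures and keep the predicted best. A minimal, self-contained sketch of that idea follows; the `true_loss` function is an entirely synthetic stand-in for a real proxy-model training run, and the two-domain setup, coefficients, and candidate grid are invented for illustration only (they are not from the paper).

```python
# Sketch of regression-based data-mixture selection (the REGMIX idea):
# 1) "train" cheap proxies on random mixtures, 2) fit a regression from
# mixture weights to loss, 3) pick the predicted-best mixture.
# Everything below is synthetic and purely illustrative.
import random

random.seed(0)

def true_loss(mix):
    # Hypothetical stand-in for a small proxy run: the validation loss a
    # tiny model would reach when trained on mixture `mix`.
    w_web, w_wiki = mix
    return 2.0 - 1.2 * w_web - 0.4 * w_wiki + 0.3 * w_web * w_wiki

def sample_mixture():
    w = random.random()
    return (w, 1.0 - w)  # a point on the two-domain simplex

# 1) One cheap proxy run per sampled mixture.
runs = [(m, true_loss(m)) for m in (sample_mixture() for _ in range(64))]

# 2) Ordinary least squares: loss ~ b0 + b1 * w_web
#    (w_wiki = 1 - w_web, so one feature suffices in this two-domain toy).
xs = [m[0] for m, _ in runs]
ys = [loss for _, loss in runs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx

# 3) Simulate many candidate mixtures; keep the one with the lowest
#    predicted loss for the (hypothetical) large-scale training run.
candidates = [i / 1000 for i in range(1001)]
best_w = min(candidates, key=lambda w: b0 + b1 * w)
best_mix = (best_w, 1.0 - best_w)
```

In this toy, the synthetic loss strictly decreases as the "web" weight grows, so the fitted slope is negative and the simulation step selects the all-web mixture; with real proxy runs the regression would instead be fit per downstream metric over many domains.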
5BXWhVbHAK
Can One Modality Model Synergize Training of Other Modality Models?
[ "Jae-Jun Lee", "Sung Whan Yoon" ]
Learning with multiple modalities has recently demonstrated significant gains in many domains by maximizing the shared information across modalities. However, the current approaches strongly rely on high-quality paired datasets, which allow co-training from the paired labels from different modalities. In this context, we raise a pivotal question: Can a model with one modality synergize the training of other models with the different modalities, even without the paired multimodal labels? Our answer is 'Yes'. As a figurative description, we argue that a writer, i.e., a language model, can promote the training of a painter, i.e., a visual model, even without the paired ground truth of text and image. We theoretically argue that a superior representation can be achieved by the synergy between two different modalities without paired supervision. As proofs of concept, we broadly confirm the considerable performance gains from the synergy among visual, language, and audio models. From a theoretical viewpoint, we first establish a mathematical foundation of the synergy between two different modality models, where each one is trained with its own modality. From a practical viewpoint, our work aims to broaden the scope of multimodal learning to encompass the synergistic usage of single-modality models, relieving a strong limitation of paired supervision. The code is available at https://github.com/johnjaejunlee95/synergistic-multimodal.
[ "Multimodal learning", "Representation learning", "learning theory" ]
Accept (Poster)
https://openreview.net/pdf?id=5BXWhVbHAK
https://openreview.net/forum?id=5BXWhVbHAK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xoDMLV5XUY", "vZfS46zo3q", "ntBRQd1lc6", "kd76bMhd0A", "glPUqqDzc3", "eUqPkz036n", "eGBYhvPFXM", "auQmH3zigp", "ZzSF2tn2hA", "W8cGKbRsHt", "UN4sANGPrd", "RUn8k6FtWk", "PjdmNbEduK", "NdIxVac62t", "I7ungArz2j", "EUgKLtak2x", "Djf0oLPJLp", "BIIWep4d6c", "9GjyVHh0ns", "7FCXzK3fRx", "3dy1ChC5p9", "2GZwQTagml" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523497289, 1732283565920, 1732459322503, 1732265222860, 1732459479153, 1732285812203, 1732264083727, 1732590133684, 1733119043655, 1732287769784, 1732266566645, 1732266000899, 1730465033624, 1732700845309, 1732573389713, 1732701153895, 1732266333631, 1732264955089, 1734675807087, 1732587360379, 1730174194221, 1729580359325 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_hzXr" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_Mtx8" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_hzXr" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_hzXr" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_hzXr" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_E3nE" ], [ 
"ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Area_Chair_iHYq" ], [ "ICLR.cc/2025/Conference/Submission2325/Authors" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_E3nE" ], [ "ICLR.cc/2025/Conference/Submission2325/Reviewer_Mtx8" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I want to sincerely thank the authors for their well considered rebuttal. I particularly appreciate the additional experimental results provided.\\nI will address the individual points below.\\n\\nRegarding the level of noise in a \\u201cnoisy\\u201d paired label in e.g., CLIP, I understand the distinction the authors are making. To the examples provided in the rebuttal I\\u2019d add that in practice many examples of alt text may also have little to no relation to the semantic content of the image. For example instead of \\u201cThe dog is sleeping\\u201d the alt text may be \\u201c(c) John Doe\\u201d or \\u201cthumbnail\\u201d. Filtering such data to achieve better alignment is a fairly active topic in the field, for example discussed in [1]. Because of this, I personally would still not describe such data as perfectly paired. Still, I believe the difference you aim to stress is that in your work you focus on the mechanisms how even without [any] well aligned data points one can still achieve improvements and that surely is still a different goal.\\n\\nRegarding the additional setting discussed in your rebuttal where you randomize the class ID per iteration, first let me thank you again for adding this, I believe this is directly addresses my concern. 
I see that in the revised manuscript you specifically mention how the label is now entirely unrelated to the ground truth class label (L374), which I believe significantly strengthens the results you present.\\n\\nRegarding the missing citations, thank you for adding them. \\n\\n\\n[1] Yu, Haichao, et al. \\\"The devil is in the details: A deep dive into the rabbit hole of data filtering.\\\" arXiv preprint arXiv:2309.15954 (2023).\"}", "{\"comment\": \"Dear Reviewer, we sincerely appreciate your worthwhile suggestions, which we believe will significantly strengthen our paper. We will address each point below.\\n\\n\\n\\n**Recommendation for word choices in Remark 2.1**\\n\\n- First, we are grateful for your recommendation to improve the wording of Remark 2.1. We will rephrase the sentence to use the smoother term *generally* rather than asserting it for all cases.\\n- We have included this term in the revised manuscript.\\n\\n**Design choices for imperfect supervision $\\\\hat{z}^j$**\\n\\n- Additionally, we appreciate your proposal to include further ablations by varying the design of the imperfect supervision $\\\\hat{z}^j$ at different levels. Following your suggestion, we conducted **additional experiments on ViT-B/32 + RoBERTa**, summarized as follows:\\n - Level 1: Random sentences (Completely imperfect supervision)\\n - Level 2: \\u201cThis is about class #.\\u201d # \\u2192 random number. (Our and the reviewer\\u2019s suggestion previously)\\n - Level 3: Related Supervision (Generated supervision via LLaVA)\\n\\n | | Single Modality | Level 1 | Level 2 | Level 3 |\\n | --- | --- | --- | --- | --- |\\n | Accuracy | 72.39 | 74.40 | 74.92 | 75.03 |\\n\\n- As the results show, improvements persist at every level. We hypothesize that this is because the $M_i$ modality model attempts to learn a comprehensive representation space from the $M_j$ modality, consistent with our theoretical findings. 
\\n- We have included these results in Appendix C.3 of the revised manuscript.\\n\\n\\\\\\nOnce again, we thank you for your valuable feedback and insightful suggestions.\"}", "{\"comment\": \"Next, we would like to discuss the remaining concerns and questions.\\n\\n**Question 1: Regarding Remark 2.1**\\n- To answer your question \\u201cthis quite directly depend on how far $\\\\hat P_j$ is from $P_j$, as well as how aligned $P_i$ is to $P_j$ to begin with\\u201d: we hypothesized that $\\\\Delta_{ij}$ is greater than $\\\\delta$. While it may seem counterintuitive, given that models like CLIP align representations across modalities to share semantics, our approach begins with distinct modalities without prior shared representations.\\n- As mentioned in our previous response, we define $\\\\hat{P_j}$ and $P_j$ as the distributions for modality $M_j$, with the hypothesis that $\\\\Delta_{ij}$ is greater than $\\\\delta$. Since $\\\\hat{P_j}$ and $P_j$ represent \\\"imperfect\\\" and \\\"perfect\\\" supervision within modality $M_j$, both distributions still remain within the representation space of $M_j$. In contrast, $P_i$ represents a distinct modality, likely forming a very different representation space. Thus, we believe that our hypothesis\\u2014that $\\\\Delta_{ij}$ is larger than $\\\\delta$\\u2014represents a general case and is well-founded.\\n\\n**Question 2: Additional results with noisier settings**\\n- To clarify the \\u201crandom sampling\\u201d in the case of [V \\u2192 A], we shuffled within the entire vision dataset in AVMNIST to create an imperfect supervision setting.\\n- We also acknowledge the reviewer's observation that this approach introduces more imperfection compared to other cases, such as [A \\u2192 V] and [A \\u2192 L]. 
Based on this feedback, we applied similar settings to the other cases.\\n- For instance, in the [A \\u2192 V] and [A \\u2192 L] cases, we introduced Gaussian noise to each audio data as described in our original version of the paper. Then, we additionally introduced random shuffling across the entire audio dataset to increase imperfection in supervision. Additionally, for [L \\u2192 A] case, we followed a similar process for generating $\\\\hat{z}^j$ as implemented in the [L \\u2192 V] case.\\n- The results, summarized in the table below, show very minimal differences compared to the findings reported in the original paper:\\n \\n | Datasets | **Original $\\\\hat{z}^j$** | **Revised $\\\\hat{z}^j$** |\\n | --- | --- | --- |\\n | IEMOCAP [L \\u2192 A] | 61.29 | 61.20 (-0.09%) |\\n | IEMOCAP [A \\u2192 L] | 56.45 | 56.49 (+0.04%) |\\n | AVMNIST [V \\u2192 A] | 42.44 | 42.44 |\\n | AVMNIST [A \\u2192 V] | 66.56 | 66.69 (+0.13 %) |\\n \\n These findings suggest that our initial approach effectively captures the intended level of imperfection. We have revised the manuscript accordingly to include these additional experiments. (Table 2 & Table 3)\\n\\n**Question 3: Prior approaches using knowledge distillation**\\n- We appreciate the reviewer's observation regarding the similarity of our approach to Knowledge Distillation (KD), where a model for modality $M_j$ teaches a model for modality $M_i$. However, our method differs significantly from traditional KD approaches, such as self-distillation (Furlanello et al., 2018) or De-KD (Yuan Li et al., 2020).\\n- First, KD typically focuses on transferring knowledge through the distillation of classifier outputs, where final knowledge is derived from label supervision. In contrast, our approach emphasizes learning the representation distribution of modality $M_j$ while simultaneously training the representation space of modality $M_i$. 
As highlighted in our theoretical framework, the core of our method lies in aligning the centers of the representation distributions between the two modalities. This approach ensures that the representations of both modalities, trained independently, are effectively bridged to synergize their latent spaces.\\n\\n\\\\\\n\\\\\\nWe hope these clarifications address the reviewer's overall concerns.\"}", "{\"comment\": \"We are truly grateful for your insightful recommendations, which we feel will considerably strengthen the impact of our paper. We will address each point below.\\n\\n\\\\\\n**Positioning with respect to other work:**\\n\\n- First, we sincerely appreciate your thoughtful insights and apologize for our earlier response regarding the related works section. Our intention was to emphasize the theoretical aspects and highlight subtle differences from prior works. However, we acknowledge that we did not sufficiently discuss the prior works, and we take full responsibility for this oversight.\\n- To clarify the difference between our approach and prior works, most existing methods focus on training or fine-tuning both modalities simultaneously [1,2], training separate classifiers for each modality [3], training prompts [4], or using a shared encoder [5] to handle missing modalities. In contrast, our method leverages the latent representations from the $M_j$ modality (with frozen parameters) and synergizes them with the representations of the $M_i$ modality (training from scratch), rather than jointly optimizing both modalities. This means that no additional training is conducted on the $M_j$ modality; it is only used to extract its representation features.\\n- We have added this discussion of related works in the revised manuscript.\\n\\n**Ablations of latent loss function**\\n\\n- We also appreciate your valuable suggestion to include additional ablations. 
Following the reviewer\\u2019s recommendation, we conducted further experiments to compare the performance of different loss functions, including **MSE (as used in our paper)**, **Cosine Embedding Loss**, **Concatenation**, and **Addition** [6,7], which are commonly used in multimodal learning. The results are summarized below:\\n\\n | Accuracy | **MSE (Paper)** | **Cosine Embedding Loss** | **Concatenation** | **Addition** |\\n | --- | --- | --- | --- | --- |\\n | IEMOCAP [L \\u2192 A] | 61.20 | 61.20 | 60.68 | 59.68 |\\n | IEMOCAP [A \\u2192 L] | 56.49 | 56.77 | 55.90 | 55.34 |\\n | AVMNIST [V \\u2192 A] | 42.44 | 42.67 | 42.07 | 41.49 |\\n | AVMNIST [A \\u2192 V] | 66.69 | 66.25 | 65.44 | 65.59 |\\n- The results indicate that the **Concatenation** and **Addition** approaches do not show significant improvements compared to either single-modality learning or our proposed method. To further analyze this, we conducted additional experiments comparing the MSE loss for our approach against the Concatenation and Addition methods, as shown in the table below.\\n\\n | Latent Loss | **MSE (Paper)** | **Concatenation** | **Addition** |\\n | --- | --- | --- | --- |\\n | IEMOCAP [L \\u2192 A] | 0.129 | 0.892 | 0.906 |\\n | IEMOCAP [A \\u2192 L] | 0.270 | 0.516 | 0.762 |\\n | AVMNIST [V \\u2192 A] | 0.017 | 0.453 | 0.774 |\\n | AVMNIST [A \\u2192 V] | 0.033 | 0.481 | 0.360 |\\n- Based on these results, we hypothesize that the limitation of the **Concatenation** and **Addition** approaches arises from the convergence behavior of the latent loss. Our method directly minimizes the gap between the latent representations of each modality through targeted optimization. In contrast, **Concatenation** and **Addition** primarily transfer a bias from the $M_j$ modality's latent representation into the $M_i$ modality, without effectively reducing or addressing the latent gap between them. 
As a result, the **Concatenation** and **Addition** approaches are not well-suited to our proposed framework.\\n- We hope this clarifies our choice of the latent loss.\\n\\n**Scaling Issue**\\n\\n- We sincerely appreciate the reviewer pointing out the scalability issue. We truly apologize for not being able to present additional results now on larger models due to computational resource constraints. This remains an empirical challenge in our current situation and has been left as future work. However, we believe that our theory is not dependent on the scalability issue and may still apply to larger models.\\n- To clearly mention the limitations in scaling, we have added a discussion of this issue to the **Limitation** section in the revised manuscript.\\n\\n[1] Shukor, Mustafa, et al. \\\"Unified model for image, video, audio and language tasks.\\\" TMLR (2023).\\n\\n[2] Liu, Haotian, et al. \\\"Improved baselines with visual instruction tuning.\\\" CVPR (2024).\\n\\n[3] Kim, Donggeun, and Taesup Kim. \\\"Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models.\\\" ECCV (2024).\\n\\n[4] Lee, Yi-Lun, et al. \\\"Multimodal prompting with missing modalities for visual recognition.\\\" CVPR (2023).\\n\\n[5] Wang, Hu, et al. \\\"Multi-modal learning with missing modality via shared-specific feature modelling.\\\" CVPR (2023).\\n\\n[6] Tsai, Yao-Hung Hubert, et al. \\\"Learning factorized multimodal representations.\\\" ICLR (2019).\\n\\n[7] Wang, Weiyao, Du Tran, and Matt Feiszli. \\\"What makes training multi-modal classification networks hard?\\\" CVPR (2020). \\n\\n\\\\\\nAgain, thank you for your valuable and intuitive insights.\"}", "{\"comment\": [\"Thanks for the additional experiments and clarifications. Given that the authors have addressed part of the concerns, I will not reduce my score. However, I find the authors' response not convincing enough to increase my scores. 
For instance:\", \"**Positioning with respect to other work:** even if prior works addressing similar research problems rely on \\\"extensive empirical validations\\\" and \\\"not showing the theoretical analysis\\\", this is not a good reason to not discuss and position the paper with respect to these works.\", \"**L2 loss:** I see the authors tested another loss to validate their work. However, my suggestions was about doing architectural changes (\\\"Other design choices can be explored. For example, concatenation, addition or other multimodal features fusion techniques\\\"). Is using an additional auxiliary loss the only way to leverage other modalities? If this is not the case the reader should expect a convincing theoretical or empirical response.\", \"**Scaling:** The current scale of the experiments does not validate the scalability to recent multimodal models. I see the authors points about focusing on the theoretical study and the lab constraints to scale more. But I think the scalability is still a concerns\"]}", "{\"title\": \"Overall Comments: Summary of revisions to the original paper\", \"comment\": [\"Dear all reviewers, we truely appreciate your feedbacks. Here, we first provide what we revised on the original paper based on your thoughtful comments. 
(We have marked **all revised parts in blue** for clarity.)\", \"**For Reviewer hzXr:**\", \"We have **updated** the results in **Table 1**.\", \"We have revised **Table 2 and Table 3** to address your comments.\", \"**Missing references to related works** on unpaired multimodal settings have been included in **Section 2.3**.\", \"We have provided a modified description of **Additional Settings $\\\\textit{(how to get $\\\\mathbf{\\\\hat{z}^j}$)}$** to **clarify the imperfect and noisy supervision**.\", \"We have added the word 'generally' to Remark 2.1 to soften the wording of the condition.\", \"We have included additional experimental results on **design choices for imperfect supervision $\\\\hat{z}^j$ in Appendix C.3.**\", \"**For Reviewer E3nE:**\", \"We have **changed notations** in **Figure 2**.\", \"**For Reviewer Mtx8:**\", \"We have **added missing related works** on unpaired multimodal settings in **Section 2.3**.\", \"We have conducted additional experiments using **different loss terms**, as presented in **Table 6 in Appendix C.1**.\", \"We also provide additional experimental results for **larger model configurations** in **Table 7 in Appendix C.2.**\", \"We have **included further related works and filled in missing details in Section 2.3.**\", \"The **scalability issue** has been addressed in the **Limitation section (Section 6: Further Discussion).**\", \"We have provided additional experimental results with an analysis of other design choices for the latent loss in Appendix C.4.\"]}", "{\"comment\": \"Dear reviewer, we sincerely thank you for your insightful suggestions and thoughtful concerns. Your feedback has been invaluable in helping us address weaknesses and strengthen our paper.
We appreciate the time and expertise you have invested in reviewing our work.\\n\\nThank you for your valuable contribution to the peer review process.\"}", "{\"title\": \"A Kind Reminder [Deadline Approaching]\", \"comment\": \"**Dear Reviewer Mtx8,**\\n\\nIt has been a privilege to refine our paper based on your insightful feedback. We sincerely hope that our previous responses, particularly regarding the design of loss functions, have addressed your concerns, if not entirely, then at least in part. As the deadline for your feedback approaches in approximately one day ```(deadline Dec. 2nd)```, and with two days remaining for us to submit our responses ```(deadline Dec. 3rd)```, we kindly offer this reminder of the timeline. If you have any additional feedback or suggestions, we would be truly grateful to receive them within the given period.\\n\\nThank you once again for your time and dedication in reviewing our work.\"}", "{\"comment\": \"Thank you very much for your thoughtful responses.\\n\\nRegarding the response to question 1, I agree that particularly in the case you mention where M_i and M_j are distinct modalities without prior alignment, this assumption would almost surely hold in practice, as I also indicated in my original review. I was only suggesting that it may be possible to construct a case where this may not be true. If that possibility exists, I think it may be better to slightly soften the language in the manuscript to allow for that, which I think can be done without reducing the claims in practice. Concretely, I think this could be as simple as to change \u201c[\u2026] \u03b4 will be much smaller than both [\u2026]\u201d to something like \u201c[\u2026] \u03b4 will generally be much smaller than both [\u2026]\u201d, i.e. just add a \u201cgenerally\u201d to correspond to the \u201cunlikely\u201d in the previous part of that sentence.
In my opinion this just allows for the possibility of outliers / adversarial cases without changing the overall statement.\\n\\nRegarding the response to question 2, again I want to thank the authors for the additional experiments conducted. Conducting such ablations under the timelines of a review period is surely not easy and the effort is deeply appreciated. Again, I believe the results significantly strengthen the contributions. Similar to the ablation with LLaVA captions you included in the paper, I\u2019d even suggest that how little difference the choice of noising function itself has on results is in itself a bit noteworthy, complementing and strengthening the insights from table 4. Since you have all the experimental results already, it may be worthwhile to add them to the appendix? But just a suggestion.\"}", "{\"comment\": \"(continue)\\n\\n**Weakness & Question 3: Additional attempts with larger architectures**\\n\\n- Given the limited computing resources of our group, we have tried to conduct experiments using architectures as large as possible by adopting a Vision Transformer (ViT-B/16, ViT-B/32) from scratch, paired with RoBERTa-large. Compared with CLIP, suggested by the reviewer, we believe that the ViT versions used in our paper are sufficiently complex architectures.\\n- As shown in the following table, the comparison of FLOPs and throughput for each ViT model demonstrates that these models are comparable in scale to CLIP.\\n\\n| | **ViT-B/32 + RoBERTa** | **ViT-B/16 + RoBERTa** | **CLIP-B/32** | **CLIP-B/16** |
| --- | --- | --- | --- | --- |
| FLOPs | $1.2\\times10^9$ | $2.6 \\times10^9$ | $0.7\\times10^9$ | $2.0\\times10^9$ |
| img / sec | $2708.39$ | $1539.53$ | $3301.29$ | $1911.55$ |
- However, we acknowledge the importance of addressing scalability concerns. Therefore, we have tested a larger model, ViT-L/16 + RoBERTa, which represents the upper limit of our lab-scale computing resources.
This configuration has approximately 5 times the FLOPs of the largest model used in our original paper, ViT-B/16+RoBERTa. The results of training ViT-L/16 and ViT-L/16 + RoBERTa are provided below:\\n\\n| | **IN** | **V2** | **Rend.** | **Sketch** | **A** | **Style** | **C ($\\downarrow$)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ViT-L/16 | 78.31 | 68.99 | 49.03 | 37.88 | 29.14 | 23.25 | 46.39 |
| ViT-L/16 + RoBERTa | 80.71 | 70.62 | 52.50 | 40.45 | 31.85 | 27.47 | 41.61 |
\n- The key aspect of our approach is its focus on addressing the fundamental question from a theoretical perspective. Consequently, our approach demonstrates improvements across all results, even when using larger models. This outcome indicates that our approach is not reliant on model scalability. We have updated the manuscript in Appendix C to include these additional experiments accordingly.\\n- Moreover, larger language models (LLMs) such as LLaMA, which are typically decoder-only models, differ in their fundamental design from our approach. Decoder-only LLMs are primarily optimized for generation tasks, whereas our work emphasizes obtaining high-quality representations from the language modality (e.g., the $M_j$ modality), rather than the generative decoding process. Given this distinction, we believe the experimental results we present offer valuable insights and provide sufficient evidence to support our hypothesis.\\n- Again, our work primarily focuses on the theoretical and practical aspects of multimodal learning rather than the scalability of individual models.\\n\\n\\\\\\n\\\\\\nThank you for highlighting these thoughtful concerns. We hope our responses have provided the clarity you were seeking.\"}", "{\"comment\": \"Dear reviewer, we extend our sincere gratitude to you for your insightful comments and valuable suggestions.
We would like to take this opportunity to address the concerns and weaknesses you raised.\\n\\n**Weaknesses**\\n\\n**Weakness 1: More complex language supervision**\\n- As demonstrated in Table 4 and Appendix B, we utilized captions generated by LLaVA and applied them to our method, resulting in more \\\"perfect\\\" representations compared to the simplified language prompts used in our initial experiments (see the column with $z^j$ without \u2018hat\u2019). While this refinement led to a slight improvement in performance, the gain was not substantial. This finding highlights a key contribution of our work: even with imperfect supervision for the $M_j$ modality, it remains possible to effectively synergize and enhance the training of models for the $M_i$ modality.\\n- By doing so, we tested the two extreme cases: the basic supervision (\\\"This is about Class #,\\\" used in our experiments) and the rich supervision (from LLaVA), and they showed a minimal gap. This implies that a case between \u201cbasic\u201d and \u201cLLaVA,\u201d i.e., moderately complicated supervision, would show similar performance with a minimal difference.\\n\\n**Weakness 2: Consideration of Assumption 1**\\n- We assumed that $\\\\Delta_{ij}$ might not converge to 0 because models trained individually on different modalities may not fully share their representation spaces.
For example, even in CLIP, which aims to align two representations from different modalities, the representation spaces for different modalities are not identical [1].\\n- Furthermore, as illustrated in Figure 3, we empirically visualize and compute the Wasserstein Distance, demonstrating how distinct these spaces remain.\\n\\n[1] Interpreting and Analyzing CLIP\u2019s Zero-Shot Image Classification via Mutual Knowledge\\n\\n\\n**Questions** \\n- We have changed the notations in Figure 2, $Z_i \\\\rightarrow Z_j$.\\n\\n\\\\\\n\\\\\\n We hope our responses have addressed your concerns and provided the necessary clarifications.\"}", "{\"summary\": \"The paper aims to answer whether imperfectly aligned paired data from other modalities can help learning in a multimodal setting.\\nSpecifically, the authors propose an additional latent loss, to directly align the target modalities' latent representation with that of the output of a pre-trained encoder of the secondary (supportive) modality. The authors introduce and study a theoretical framework and show that even imperfect paired data can help approximate a hypothetical, perfectly aligned representation. They further demonstrate empirically that the additional latent loss led to stronger performance of the target modalities' encoder across various tasks and modalities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The work considers a range of relevant modalities and conducts experiments across language, vision, and audio in various cross-combinations.\\nThe mathematical framework introduced is intuitive and easy to follow.\", \"weaknesses\": \"The paper asserts in multiple locations that prevailing multimodal learning methods require \\\"perfectly paired datasets\\\" (quote from the introduction) between modalities. This, in my opinion, is not an accurate representation of the thinking in the field.
The authors cite CLIP as a notable multimodal model, which famously is trained on noisy web-scale paired data in the form of image alt text. Training multimodal models on noisy labels such as alt-text from web-scraped images is common practice and widely established (Radford et al., 2021; Dosovitskiy et al., 2020; ...). While improving the alignment of the training modalities is generally seen as desirable (e.g. Fang et al., 2023), \\\"perfection\\\" seems not a requirement.\\nThe paper does not cite and / or discuss other works in the space of aligning multiple modalities without direct paired supervision data, including popular works such as ImageBind (Girdhar et al., 2023) or 4M (Mizrahi et al., 2024). These works do not (solely) rely on paired multimodal data, seemingly directly addressing the limitations discussed by this work in section 2.3.\\nThis, together with arguably understating how much alignment on noisily paired data has been previously studied in the field, arguably limits the novelty of this work.\\n\\nOne of the main statements of this work is that the label does not need to be perfectly paired / can be noisy. However, the transformations to introduce this noise studied in the work may not be sufficiently realistic. For example for the L -> V case, the noisy label \\\\hat z^j is constructed by embedding the text \\\"This is about (Class|Emotion) #.\\\" as per table 5. In terms of the measured downstream task, which is classification, this label is arguably not noisy, but perfectly represents the target task. In table 4 it is shown that changing this supervision signal with a caption produced by LLaVA leads to only minor improvement, which may not be surprising in this setting. (See the Questions section for suggestions around this.)\", \"questions\": \"Regarding Remark 2.1. \\\"\\u03b4 does not hinder the synergy\\\", you say that\\n\\\"We can extract an imperfect feature representation from Pj by giving\\nimperfect input to the modality Mj .
This allows \u02c6zj exist in the distribution Pj 2. Consequently, \u02c6zj\\nis closer to or part of the latent space of the Mj than to that of the Mi or the true latent space.\\\"\\n\\nWouldn't this quite directly depend on how far \\\\hat P_j is from P_j, as well as how aligned P_i is to P_j to begin with? Surely one could construct counter examples with adversarially chosen \u03b4? This may not be much of a practical concern for reasonably close \\\\hat P_j but is this statement not a bit strong in the general case?\\nExpanding on this, perhaps this could be empirically verified by exploring different levels of noise to introduce in \\\\hat z^j, particularly in the L -> V task as suggested above. Have you perhaps already considered / explored different noising functions and compared their impact?\\n\\nFor the V -> A case in AVMNIST, you say \\\"For the [V\\u2192A] case with AVMNIST, we use randomly shuffled images from AVMNIST as \u02c6zj in audio classification tasks\\\". Could you clarify the random sampling in this case? Is it a random image from the entire dataset or a random image from the samples of the same target class? If it is a random image (i.e. unrelated to the paired modality at all), this seems significantly more \\\"noise\\\" than in other settings; it'd be great to better understand the motivation for this choice and perhaps similar experiments for other modalities.\\n\\nOn a general level, if we assume that the target distribution for a modality encoder g_i is similar to the one of a pre-trained encoder g_j of a different modality, the proposed latent alignment loss has some similarity to knowledge distillation. In this field there's been some notable prior work that suggests that the success of KD is partially attributable not only to a superior knowledge of the teacher but also to benefits of the training strategy itself.
Notably, Born Again Networks (Furlanello et al., 2018) suggests that a simple strategy of self-distillation can improve performance. Yuan Li et al., 2020 explores this further in \\\"Revisiting Knowledge Distillation via Label Smoothing Regularization\\\". This is relevant to this work since it could suggest a different mechanism leading to the empirically observed improvements that is less about multimodal transfer and perhaps more about a sort of regularization effect of the added latent loss. Has this been considered?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I again thank the authors for all the improvements made. I have raised my score to 6, recommending acceptance.\"}", "{\"comment\": \"Thanks for your response. The rebuttal addressed my concerns and I will maintain my positive evaluation.\"}", "{\"comment\": \"Dear reviewer, we greatly appreciate the time and effort you have dedicated to evaluating our work. Your thoughtful guidance has significantly improved our work, and we are sincerely grateful for your valuable contribution to our academic endeavor.\\n\\n\\\\\\nThank you for your significant contribution to the peer review process.\"}", "{\"comment\": \"Dear reviewer, we deeply appreciate the reviewer's thoughtful feedback and constructive recommendations. Let us first address the concerns and insightful weaknesses & questions you raised.\\n\\n**Weaknesses & Questions:**\\n\\n**Weakness & Question 1: Comparisons to the suggested prior works**\\n- While prior works, such as UniVal [1] and LLaVA [2] as suggested by the reviewer, have made significant advancements in leveraging multimodal learning with unpaired datasets, these contributions primarily rely on extensive empirical validations without providing theoretical analysis. Also, each work focuses on particular pairs of modalities.
Our work emphasizes the theoretical answer to the fundamental question: \\\"How can multimodal datasets synergize with one another?\\\" even when paired datasets are imperfect. Our theoretical results are not limited to particular modalities (we do not assume a certain modality in our theory) or to specific methods of making the pair imperfect (we use general notation for the imperfect paired modality $\\\\hat{z}^j$ in our theory).\\n- We hope to emphasize that our theory can be a fundamental tool for understanding how the existing works, including your suggested prior works, work well under unpaired supervision.\\n- We appreciate the reviewer\u2019s valuable observation regarding the insufficient discussion of prior research on unpaired or missing modalities, such as [3], [4], and [5], in our related works section. We ensure that these studies are thoroughly incorporated and discussed in the revised version of the paper. We have updated our paper accordingly and thank the reviewer for bringing this to our attention.\\n\\n**Weakness & Question 2: Ablations on the loss term**\\n- Our choice of L2 loss is intended to directly minimize the difference between the modalities' representation spaces, ensuring semantic consistency. That said, we fully acknowledge the reviewers' suggestion to explore alternative latent loss functions, which could provide additional insights into the effectiveness of different approaches.\\n- To address this, we have additionally tested the case with cosine embedding loss as the latent loss function, i.e., measuring the distance between two features from different modalities via cosine distance. As presented below, the results demonstrate the comparative performance across these methods, highlighting the minimal sensitivity to the form of the loss term.
We hope this addition provides a clearer understanding of the ablation studies on the latent loss term.\\n- We want to emphasize that our work focuses on the theoretical understanding of the synergy between two modalities. Thus, we aim to use a simple yet direct loss term to align two modalities.\\n \\n | **Datasets** | **MSE (Paper)** | **Cosine Embedding Loss** |
 | --- | --- | --- |
 | ImageNet [L \u2192 V] | 79.58 | 79.61 (+0.03%) |
 | IEMOCAP [L \u2192 A] | 61.20 | 61.20 (-0.00%) |
 | IEMOCAP [A \u2192 L] | 56.49 | 56.77 (+0.28%) |
 | AVMNIST [V \u2192 A] | 42.44 | 42.67 (+0.23%) | 
 | AVMNIST [A \u2192 V] | 66.69 | 66.25 (-0.44%) |\"}", "{\"comment\": \"Dear reviewer, we sincerely thank you for the thoughtful perspectives and insights. We would like to begin by addressing the concerns you raised regarding the weaknesses in our work.\\n\\n**Weaknesses**\\n\\n**Weakness 1: About \u201cperfect vs. imperfect\u201d paired modalities**\\n- First, we acknowledge the opinion that existing multimodal training basically uses \u201cnoisy\u201d paired labels, e.g., CLIP with web-scraped paired samples. However, we want to point out that the prior multimodal approaches rely on pairs of modalities that provide each other with detailed or simplified semantics. For example, consider an image of a poodle sleeping on a sofa. Then, for instance, the web-scraped textual descriptions can be \u201cThe poodle is on a sofa\u201d as a detailed description, \u201cThe poodle is sleeping,\u201d or even \u201cThe dog is sleeping\u201d as a simplified description. While differing in detail, all these descriptions convey the essential semantic content\u2014that a dog or poodle is present\u2014preserving the primary object and concept in the image.
That is why we referred to such descriptions as \\\"perfect\\\" because they retain critical semantic elements relevant to the main object.\\n - Conversely, we used \\\"imperfect\\\" to denote cases where descriptions lack meaningful semantic information about the primary object. For instance, instead of identifying a \\\"dog,\\\" a description might label it as \\\"class 1,\\\" \\\"class 2,\\\" or \\\"class N.\\\" These labels, while potentially relevant in specific contexts, do not inherently convey any information about a dog. In this sense, \\\"imperfect\\\" descriptions provide minimal semantic richness. For comparison, \\\"noisy supervision\\\" (e.g., LAION-400M) still contains more contextual information than what we define as \\\"imperfect\\\" descriptions.\\n- Also, as you pointed out, in the [L\\u2192V] experiments, \\\"This is about Class #.\\\" probably provides the information for visual tasks because the different index (#) can be a clue for discriminating different objects for visual classification tasks. To address the reviewer's concern directly, we expanded our experiments to include a more complex setting for comparison. Specifically, we imposed textual descriptions such as \\\"This is about Class #,\\\" where \\\"#\\\" is randomly assigned to fixed classes across iterations. For instance, the description of a dog could be \\\"Class 1\\\" in the first iteration and \\\"Class 1423\\\" in the next. Also, # can be chosen from an arbitrarily wider range of integers than the number of the existing image categories for the downstream visual tasks, which means that the textual description does NOT provide information about the visual semantics or tasks. 
The results of these experiments are summarized as follows.
| **ImageNet 1K [L \u2192 V]** | Original **$\hat{z}^j$** | **Revised $\hat{z}^j$** |
| ----------------------- | ----------------------- | ----------------------- |
| ResNet-50 + RoBERTa | 76.77 | 76.90 (+0.13%) |
| ViT-B/32 + RoBERTa | 74.97 | 74.92 (-0.05%) |
| ViT-B/16 + RoBERTa | 79.54 | 79.58 (+0.04%) | 

- The performance changes are minimal despite the coarse (imperfect, in our notation) paired texts. The results appear consistent with our key findings, emphasizing, \u201cEven imperfect supervision can synergize with the other modalities.\u201d
- We think it is hard to say that our paired supervision is \u201cperfect\u201d or \u201cnot noisy.\u201d Also, the minimal gaps in Table 4 do not imply the perfection of our pairing but emphasize that imperfect supervision is sufficient to promote the training of other modalities.
 - We hope this clarifies our intended use of these terms and provides additional context to address any concerns. Additionally, we have revised the related experiments for [L\u2192V] cases in the revised manuscript. We will add the additional experiments (e.g., ViT-B/32 + BERT, ViT-B/16-BERT, ResNet-50 + BERT) to Table 1 as soon as the results become available. 
- We appreciate your thoughtful review and hope to hear your feedback during the discussion period.

**Weakness 2: About the missing related works and our emphasis on theory**
- We apologize for not citing prominent works like ImageBind or the 4M paper, which demonstrated impressive performance without relying on directly paired datasets.
Our emphasis beyond these works is theoretically answering the foundational question: \"**How can one modality model effectively promote training the other modality across diverse settings, even with 'noisy' or 'imperfect' supervision?**\" Our theory can explain how the existing multimodal methods, including ImageBind and 4M, and our experiments work well. While many recent works have achieved surprising empirical results in multimodal learning, relatively few have investigated it from a theoretical viewpoint. Therefore, we aim to address this gap.\\n- We have incorporated the suggested works into our revised paper and strengthened our theoretical novelty to provide a more comprehensive view.\"}", "{\"metareview\": \"This paper investigates whether one modality model can enhance the training of another modality model without requiring paired multimodal supervision. The authors provide both theoretical and empirical evidence that imperfect supervision from one modality (e.g., language) can improve the performance of another modality (e.g., vision). The analysis reveals that interpolated representations between two modalities can outperform single-modality representations, even with imperfect cross-modal supervision, and they empirically validate this work through experiments across vision, language and audio modalities with consistent improvements.\\n\\nReviewers were split with 1 marginal reject, 1 marginal accept, and 1 accept. They generally found the work interesting and useful, with strong results across language, vision, and audio in various cross-combinations, and that the theory is intuitive and easy to follow.\\n\\nKey weaknesses pointed out by the reviewers include the need for more comparisons with prior work, the fact that using an unpaired modality to improve performance had been explored before, larger-scale experiments, and more ablation studies.
From what I've seen, the authors have provided additional experiments on all of these concerns, and I am satisfied with them, so I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer hzXr raised their score from 5 to 6 after the discussion since they felt their main concerns had been addressed, with which I agree. Most of these concerns were comparisons with prior work and clarifications on the weak pairings/noisy pairing settings. Reviewer E3nE maintained their score of 8.\\n\\nReviewer Mtx8 maintained their score of 5. Their main concerns were that the main contribution\u2014showing that using an unpaired modality can improve performance\u2014has already been explored in prior works, requesting more discussions wrt prior work, requests for ablations of loss functions beyond L2 loss to interpolate modality representations, more architectural ablations to fuse modalities, and increasing the scale of the models in experiments. From what I've seen, the authors have provided additional experiments on all of these concerns, and I am satisfied with them.\"}", "{\"title\": \"To add further comments about L2 loss\", \"comment\": \"First, thank you again for the thoughtful feedback and insights. We would like to provide additional comments regarding the latent loss function and its comparison to alternative design choices, particularly focusing on the L2 loss.\\n\\n**Further Comments on the L2 Loss**\\n\\n- We have included experimental results showing both accuracy and the gap (MSE) between $\\\\hat{z}^k$ and $\\\\hat{z}^j$ (refer to the updated tables). Here, we aim to analyze in more detail why alternative design choices, such as concatenation and addition, may not align well with our proposed approach.\\n\\n- Our approach is grounded in the theoretical principle of embedding the representation space into an interpolated space between the $M_i$ and $M_j$ modalities, achieved through the latent loss $\\\\mathcal{L}_z$ with interpolation term $\\\\alpha$.
Unlike our method, concatenation and addition do not include any mechanism to explicitly reduce the gap between $\\\\hat{z}^k$ and $\\\\hat{z}^j$. In contrast, our method incorporates representations from both modalities directly into the loss function, which helps minimize this gap. Therefore, we believe that using L2 loss (or cosine embedding loss) is more suitable for our framework compared to concatenation or addition.\\n\\n- To address this point in detail, we have added the above analysis to Appendix C.4 in the revised manuscript.\\n\\n\\\\\\nWe sincerely appreciate the opportunity to consider and reflect on alternative approaches. Thank you for your valuable suggestions, which have helped us strengthen our work.\"}", "{\"summary\": \"This paper investigates whether one modality model can enhance the training of another modality model without requiring paired multimodal supervision. The authors provide both theoretical and empirical evidence that imperfect supervision from one modality (e.g., language) can improve the performance of another modality (e.g., vision). They establish mathematical foundations showing that an interpolated representation between two modalities can outperform single-modality representations, even with imperfect cross-modal supervision. The work is validated through extensive experiments across vision, language and audio modalities, demonstrating consistent performance improvements.
For example, in the vision domain, they show improvements of 1.5-2.5% on ImageNet classification and similar gains on robustness benchmarks by leveraging simple language prompts during training.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Novel insights: The work challenges common assumptions about requiring paired supervision for multimodal learning and demonstrates unexpected improvements using single-modality pre-training and synergies between seemingly unrelated modalities.\", \"Strong theoretical foundation: The paper provides rigorous mathematical proofs for how and why cross-modal learning can work without paired supervision, establishing bounds on the interpolation coefficient $\\\\alpha$ and showing the existence of superior interpolated representations.\", \"Comprehensive empirical validation: The authors demonstrate their approach across multiple modality pairs (Vision-Language, Vision-Audio, Language-Audio) and various architectures, showing consistent improvements across different settings and tasks. Results include not just standard classification metrics but also out-of-distribution generalization and robustness benchmarks, showing broad improvements across different evaluation criteria.\"], \"weaknesses\": [\"Other language supervision: The language prompts used (e.g., \\\"This is about Class #\\\") are quite basic. 
It would be interesting to see how the method performs with more complex or varied language supervision.\", \"Theoretical assumptions: Some theoretical assumptions (e.g., Assumption 1 about $\\\\Delta_{ij} \\\\ge 0$) could benefit from more empirical validation or discussion of when they might not hold.\"], \"questions\": [\"Figure 2 right, $Z_i$ should be $Z_j$.\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper demonstrates that performance on a target modality can be improved by leveraging another modality, even without paired samples. Both theoretical and empirical evidence are provided to support this claim, showcasing the method's effectiveness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles an important problem: obtaining paired data is often challenging in many setups.\", \"The approach is backed by both theoretical analysis and empirical results, showing clear improvements over baseline methods.\", \"The paper is well-structured, with illustrations that help clarify key messages.\"], \"weaknesses\": \"* The main contribution\u2014showing that using an unpaired modality can improve performance\u2014has already been explored in prior works. For instance, some studies demonstrate leveraging unpaired modalities through pretraining on one modality and fine-tuning on another, like from image to video to audio [1], or from text to image-text [2]. Additionally, the issue of handling unpaired or missing modalities has been addressed before, yet the paper does not discuss relevant works in this domain [3,4,5]. Including this discussion would better position the paper.\\n\\n* It is not clear why the authors decide to leverage the other modality through an L2 loss between the feature spaces. Other design choices can be explored.
For example, concatenation, addition or other multimodal features fusion techniques.\\n\\n* The experiments use relatively small models on classification tasks. It remains unclear whether the proposed method would be effective on larger, more complex, maybe generative models (e.g., Multimodal LLMs, CLIP).\\n\\n[1] Shukor, Mustafa, et al. \\\"Unified model for image, video, audio and language tasks.\\\" TMLR (2023).\\n\\n[2] Liu, Haotian, et al. \\\"Improved baselines with visual instruction tuning.\\\" CVPR 2024.\\n\\n[3] Kim, Donggeun, and Taesup Kim. \\\"Missing Modality Prediction for Unpaired Multimodal Learning via Joint Embedding of Unimodal Models.\\\" ECCV (2024).\\n\\n[4] Lee, Yi-Lun, et al. \\\"Multimodal prompting with missing modalities for visual recognition.\\\" CVPR. 2023.\\n\\n[5] Wang, Hu, et al. \\\"Multi-modal learning with missing modality via shared-specific feature modelling.\\\" CVPR. 2023.\", \"questions\": \"Please check the weaknesses section, in particular the question about how leveraging other modalities is done.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5BSlakturs
Enhancing Compositional Text-to-Image Generation with Reliable Random Seeds
[ "Shuangqi Li", "Hieu Le", "Jingyi Xu", "Mathieu Salzmann" ]
Text-to-image diffusion models have demonstrated remarkable capability in generating realistic images from arbitrary text prompts. However, they often produce inconsistent results for compositional prompts such as "two dogs" or "a penguin on the right of a bowl". Understanding these inconsistencies is crucial for reliable image generation. In this paper, we highlight the significant role of initial noise in these inconsistencies, where certain noise patterns are more reliable for compositional prompts than others. Our analyses reveal that different initial random seeds tend to guide the model to place objects in distinct image areas, potentially adhering to specific patterns of camera angles and image composition associated with the seed. To improve the model's compositional ability, we propose a method for mining these reliable cases, resulting in a curated training set of generated images without requiring any manual annotation. By fine-tuning text-to-image models on these generated images, we significantly enhance their compositional capabilities. For numerical composition, we observe relative increases of 29.3\% and 19.5\% for Stable Diffusion and PixArt-$\alpha$, respectively. Spatial composition sees even larger gains, with 60.7\% for Stable Diffusion and 21.1\% for PixArt-$\alpha$.
[ "Diffusion models", "text-to-image generation" ]
Accept (Spotlight)
https://openreview.net/pdf?id=5BSlakturs
https://openreview.net/forum?id=5BSlakturs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xKfqtw6RIw", "wk4R0UlQIw", "uYsgmk6P6b", "sNZytrwPeN", "oSuyG4Y6TE", "nxZPmeBM3Y", "nbosX6P8v1", "lhLBiSgG0u", "l0H3y9mwMC", "koTig1UAj6", "dOCGWf8d5u", "dD5UxDkabi", "Vf34L6lTHc", "NX9PMfHjWI", "2OH2ACSSy8", "2A8pWbdTaT", "0M6JqMZmYc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732570239393, 1732666403423, 1732666188348, 1730622151953, 1732666234039, 1732621138806, 1732959659531, 1732634686850, 1732666393001, 1732884123268, 1732873778219, 1737523503964, 1734362412608, 1732945796356, 1732570272069, 1730574007329, 1730280869334 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_ZEiT" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_uQGW" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_ZEiT" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2445/Area_Chair_VnnN" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_uQGW" ], [ "ICLR.cc/2025/Conference/Submission2445/Authors" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_uQGW" ], [ "ICLR.cc/2025/Conference/Submission2445/Reviewer_MsjD" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate the reviewer's thoughtful feedback and their positive assessment of our method's simplicity, 
broad applicability, and experimental validation. Below we address each point raised in the Weaknesses and Questions sections:\\n\\n---\\n\\n> **Comparison with Ranni [1].**\\n\\nWe thank the reviewer for bringing Ranni to our attention. Similar to LMD, Ranni employs a two-stage generation process: text-to-panel and panel-to-image. We have conducted comparisons with Ranni (implemented on SD2.1 768x768) on our dataset. As shown in the updated Table 4, while Ranni achieves comparable numerical and spatial accuracy (50.7% vs. our 51.3%, and 35.5% vs. our 36.6%), it does so at a significantly higher cost to aesthetic quality and recall (Recall: 46.6 vs. our 71.3; Aesthetic Score: 4.43 vs. our 5.13). A qualitative comparison has also been added in Appendix A.7. We also would like to note that Ranni uses additional network structures for conditional input, requires extensive fine-tuning (40k steps vs. our 5k), and relies on a much larger dataset (50M samples vs. our 2.5k). These distinctions highlight the efficiency of our method in achieving competitive performance with fewer resources.\\n\\n\\n\\n| Method | **Numerical** | | | **Spatial** | | |\\n|----------------------------------------------|-----------------------------|-----------|-----------|-----------------------------|-----------|-----------|\\n| | Acc \\u2191 | Aes \\u2191 | Rec \\u2191 | Acc \\u2191 | Aes \\u2191 | Rec \\u2191 |\\n| Stable Diffusion 2.1 (768x768) | 37.5 | **5.19** | **76.7** | 17.8 | **5.38** | **70.4** |\\n| + sampling with reliable seeds | 43.0 | 5.23 | 73.9 | 23.4 | 5.35 | 69.1 |\\n| + fine-tuning (random) | 41.8 | 5.13 | 70.9 | 22.0 | 5.25 | 69.9 |\\n| + fine-tuning (reliable) | 48.5 | 5.12 | 70.5 | 28.6 | 5.24 | 66.9 |\\n| + fine-tuning (random + rectified) | 49.2 | 5.13 | 72.0 | 32.0 | 5.05 | 64.5 |\\n| + fine-tuning (reliable + rectified) | **51.3** | 5.13 | 71.3 | 36.6 | 5.06 | 66.7 |\\n| + LMD [Lian et al., 2023] | 35.8 | 4.65 | 49.4 | **51.9** | 4.77 | 44.2 |\\n| + MultiDiffusion 
[Bar-Tal et al., 2023] | 29.2 | 4.40 | 36.2 | 51.4 | 4.19 | 39.6 |\\n| + Ranni [Feng et al., 2024b] | 50.7 | 4.43 | 46.6 | 35.5 | 4.38 | 28.4 |\"}", "{\"comment\": \"> **Limited generalization testing to other diffusion models beyond Stable Diffusion and PixArt-alpha. The method's adaptability is unclear**\\n\\n Thank you for your comment. We must note that we test our proposed method on both Stable Diffusion (SD) and PixArt-alpha, while previous research has primarily focused on improving Stable Diffusion alone (LMD, MultiDifussion, and Ranni [1]). We chose these two models because they represent distinct architectures: Stable Diffusion, a U-Net based diffusion model, and PixArt-alpha, a Transformer-based diffusion model. We believe testing on these two models sufficiently demonstrates our method's ability to generalize across different generative architectures, ensuring its robustness and applicability.\\n\\n [1] Feng, Yutong, et al. \\\"Ranni: Taming text-to-image diffusion for accurate instruction following.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\"}", "{\"comment\": \"We appreciate the reviewer's thoughtful feedback and their positive assessment of our method's simplicity and usefulness. Below we address each point raised in the Weaknesses and Questions sections:\\n\\n> **How do we decide if an image is correctly generated in Figure 4? Does this process bring biases?**\\n\\nWe utilized CogVLM2 to assess whether an image is correctly composed (details provided in Section 4.2 and Appendix A.3.1). This is an automatic approach, and we agree with the reviewer that a discussion about potential biases is necessary, which we have added in A.3.1. In our particular case, we have spent significant effort to manually verify the correctness of CogVLM2 and found no specific biases in the model. 
Additionally, we manually reviewed the 600 images generated to create Figure 4 (top) and found very few assessment errors from CogVLM2. The model performed well across various categories, backgrounds, and object arrangement patterns, including scenarios with overlapping objects.\n\n\n> **Does CogVLM2 result in biases, for example, a preference towards non-overlapping layouts? Are the fine-tuned models overfitted to generating images with non-overlapping layouts?**\n\nWe would like to clarify that many generated images with overlapping object layouts are correctly annotated by CogVLM2. To illustrate this, we provide several examples in Figure 8 of Appendix A.3.1 (in the revised paper). Figure 4 seems to highlight layouts with non-overlapping objects because the pre-trained Stable Diffusion and PixArt-alpha models tend to generate such images more frequently, not because of a bias in the evaluation model. Notably, our fine-tuned model does not overfit to a specific mode, as demonstrated by its significantly higher recall scores compared to all other baselines (Table 4). \nTo illustrate that the fine-tuned models are not restricted to generating non-overlapping layouts, we showcase examples of images with object occlusions generated by our fine-tuned Stable Diffusion model in Figure 9 of Appendix A.3.1. We would like to point out that this is another strength of our method compared to layout-to-image methods such as LMD and MultiDiffusion, which rely on non-overlapping bounding boxes as input constraints.\n\n\n\n\n> **Which layer is used for the heatmap in Figure 4?**\n\nWe follow the established convention in the field (e.g., [2], [3]) by computing the average of the attention maps across all timesteps and all cross-attention modules within the UNet model. \n\n\n> **The four coins can be parallel or any position arrangement. 
Why is the heatmap for \"four coins\" in Figure 4 coincidentally split into the 4 grids?**\n\n We would like to clarify that this heatmap represents the behavior of the original pre-trained diffusion model. The pattern in Figure 4 occurs because the prompt \"four coins\" is most often composed when arranged in a 2x2 grid. This is likely influenced by the pre-training data, which contains many examples of \"four\" arranged in this way. However, this is not the only pattern the model can generate; other arrangements, such as parallel ones, as the reviewer suggests, are possible but less representative. The plot in Figure 4 does not emphasize these less frequent cases.\n \n\n> **Are overlapped results hard to generate?**\n\nWe would like to note that our seed selection method does not favor any particular type of image, whether overlapping or non-overlapping. We do not impose any specific mechanism to check for this, and CogVLM does not bias toward any particular case, as far as we have observed. While controlling when the model generates overlapping versus non-overlapping objects could be useful, we believe that this goes beyond the scope of our current work.\n\nIn general, how often and accurately a model generates overlapping objects depends largely on its training data. If the training data predominantly contains non-overlapping layouts, the model is less likely to generate accurate overlapping compositions. This is a common challenge with pre-trained generative models and is not specific to our method. Nevertheless, as shown in Figure 9, there are still many instances where overlapping objects are generated successfully.\n\n> **Do we control the noise by fixing the seed? Why is there a preference for certain seeds in Section 3.3?**\n\nYes, fixing the random seed effectively fixes the input noise for diffusion models. In principle, our paper highlights that certain noise patterns can be exploited to enhance model performance. 
In practice, we use the initial seeds as a straightforward and reproducible way to control the noise. Each seed uniquely determines a specific noise pattern, and with a fixed seed, the model consistently starts from the same deterministic noise rather than resampling. This leads to consistent positional arrangements and explains the observed preference for certain seeds in Section 3.3.\"}", "{\"summary\": \"In this paper, the author explores the random noise for diffusion-based generation, especially for text-to-image generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea is easy to follow.\nThe problem is well-designed. \nThe finding is interesting to many diffusion users.\", \"weaknesses\": \"1. Section 3.2 does not convincingly prove the correlation. I am not convinced about the heatmap results.\n- How do you decide the correct / incorrect in Figure 4? Does this process introduce bias or a preference over the distribution?\n- Which layer is used for the heatmap? The output of the diffusion model before the VAE decoder? \n- The four coins can be parallel or in any positional arrangement. So why is the heatmap in Figure 4 coincidentally split into the 4 grids?\n\n2. More compositional generation results and failure cases\nHow are partially overlapped objects generated? \nThe samples shown have almost no overlap. \n\n3. The definition of seed. \nSo you just fix the seed rather than the noise? \nEvery time, will we resample the noise according to the seed? \nSo why would there be a preference for certain seeds in Section 3.3?\n\n4. Minor problem\nWhat are \"these images\" in the abstract? Do you mean training images collected by you? Please specify. \n\n5. Scalability to unseen prompts.\nHow about 7 or 8 objects? \nHow about the ``boundary'' or ``corner''?\", \"questions\": \"Please see the Weakness for details.\n\nMy main concern is as follows. 
\n(1) Correctness is decided by the large model CogVLM2. This may also introduce bias, such as a preference for non-overlapping layouts. \nFine-tuning may overfit the model. \n\n(2) Fixed system seed. You mean the input noise is also fixed? That is, the input noise is effectively fixed.\n\n(3) Overlapped results are hard to generate.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **What are \"these images\" in the abstract?**\n\nThank you for pointing this out. \"These images\" in the abstract refers to the generated images used for training. To make this clearer, we have changed \"these images\" to \"these generated images\".\n\n\n> **Scalability to out-of-scope prompts/tasks**\n\nThank you for this interesting question. In principle, our method obtains the largest improvement when seed mining is conducted on the same task as that of the text prompts for testing. Nevertheless, our fine-tuned models demonstrate a good ability to generalize and improve the compositional accuracy for out-of-scope but related tasks (e.g., composing more than 6 objects in our case).\n\nTo evaluate adaptability to unseen tasks, we tested our method on two new datasets: (1) numerical prompts with required quantities varying from 7 to 8, and (2) multi-category numerical compositions (such as \"two cups and four spoons\"). To test our method on multi-category numerical composition, we created a test set of 600 prompts, featuring 10 category pairs with diverse backgrounds and numerical instructions (see Appendix A.6.2). \n\nAs shown in Appendix A.6.2, A.6.3 and the two tables below, our fine-tuned models extend their improvements to instructions that were never seen during seed mining or fine-tuning. 
\\n\\n**Table 1: Accuracy comparison when generating 7 and 8 objects**\\n\\n| Method | Acc \\u2191 (Avg) | MAE \\u2193 (Avg) | Acc \\u2191 ( 7 ) | MAE \\u2193 ( 7 ) | Acc \\u2191 (8) | MAE \\u2193 (8) |\\n|-----------------------------------------|---------------|---------------|-------------|-------------|-------------|-------------|\\n| Stable Diffusion 2.1 | 8.7 | 3.27 | 8.3 | 2.85 | 9.2 | 3.68 |\\n| + fine-tuning (reliable + rectified) | **16.7** | **1.97** | **10.0** | **1.79** | **23.3** | **2.15** |\\n|-----------------------------------------|---------------|---------------|-------------|-------------|-------------|-------------|\\n| PixArt-alpha | 5.8 | 3.76 | 3.3 | 3.73 | **8.3** | 3.78 |\\n| + fine-tuning (reliable + rectified) | **8.3** | **3.21** | **9.2** | **3.02** | 7.5 | **3.40** |\\n\\n\\n**Table 2: Accuracy comparison for multi-category positional composition.**\\n\\n| Method | Acc \\u2191 |\\n|-------------------------------------------|--------|\\n| Stable Diffusion 2.1 | 10.0 |\\n| + sampling with reliable seeds | 11.5 |\\n| + fine-tuning (reliable + rectified) | **15.7**|\\n|-------------------------------------------|--------|\\n| PixArt-alpha | 12.8 |\\n| + sampling with reliable seeds | 14.8 |\\n| + fine-tuning (reliable + rectified) | **16.5**|\\n\\n\\n[1] Liu, Haotian, et al. \\\"Visual instruction tuning.\\\" Advances in neural information processing systems 36 (2024).\\n\\n[2] Hertz, Amir, et al. \\\"Prompt-to-prompt image editing with cross attention control.\\\" arXiv preprint arXiv:2208.01626 (2022).\\n\\n[3] Chen, Minghao, Iro Laina, and Andrea Vedaldi. \\\"Training-free layout control with cross-attention guidance.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.\"}", "{\"comment\": \"Thank you. Could you also provide a qualitative comparison for the multiple category numerical composition case? 
Visual comparison against SD and Ranni should suffice (a similar diagram as in Fig 8 of A.7)\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your valuable feedback and for raising your rating! As per your suggestion, we will include two tables in the revised version. We sincerely appreciate your time and effort in reviewing our paper.\"}", "{\"comment\": \"Thank you for the suggestion! We have added another qualitative comparison between SD and Ranni for multiple-category numerical composition in Figure 11 in the new revision. If further comparisons are needed, we are happy to provide them.\"}", "{\"comment\": \"We appreciate the reviewer's detailed feedback and their positive assessment of our method's novelty, efficiency and experimental validation. Below we address each point raised in the Weaknesses and Questions sections:\\n\\n> **The reliance on selected seeds may limit the diversity of generated outputs**\\n\\nThank you for raising this important point, as it aligns closely with a key contribution in our paper. We specifically address concerns about limiting model diversity by adopting a novel fine-tuning scheme that modifies only the \\\"Query\\\" and \\\"Key\\\" projection layers in the attention modules. This approach is designed to preserve the model\\u2019s inherent diversity while improving accuracy (see Table 9 for more details). As shown in Table 4, our method achieves a Recall score of 71.3, which is remarkably close to the original model\\u2019s score of 76.7, indicating that diversity is largely maintained. In contrast, other methods, such as LMD (Recall = 49.4), MultiDiffusion (Recall = 36.2), and Ranni (Recall = 46.6), show significantly reduced diversity. Additionally, if diversity were not a major concern, our approach could achieve even higher accuracy, as demonstrated in Table 9.\\n\\n\\n> **Lack of an automatic, inference-time method to select reliable seeds for generating accurate compositions. 
Will the authors consider developing an algorithm to dynamically choose seeds based on prompt characteristics?**\n\nThank you for raising this important point. We would like to clarify that our seed mining method is indeed automatic, as it utilizes CogVLM. Perhaps the underlying concern here is the efficiency and practicality of applying this approach on-the-fly for a specific testing prompt. We acknowledge that, in its current implementation, the process can be somewhat slow for such use cases, as it involves generating hundreds of images to evaluate the reliability of a random seed for a given prompt. That said, the approach remains feasible with more efficient generators, such as one-step diffusion models, where the computational overhead is significantly reduced. Additionally, we find the reviewer\u2019s suggestion of a direct mapping from text embeddings to reliable noise patterns highly intriguing. This could indeed streamline the seed selection process and improve efficiency. However, we believe that collecting the paired training data and training such a model would require substantial effort, making it an excellent avenue for future work. We are excited by the potential of this direction and would be happy to pursue it in follow-up research, building on the foundations established in this study. Thank you for highlighting this compelling opportunity.\n\n\n> **Potential decline in image quality is unexplored. Are there measures to prevent potential degradation in image quality or unintended biases? How do the authors ensure that the self-generated dataset maintains high quality compared to real-world or externally validated datasets?**\n\nThank you for raising this concern. A drop in image quality is indeed a challenge for all methods aiming to improve model fidelity, including ours. 
However, as shown in Table 4, our method achieves a much smaller loss in aesthetic scores\u2014a proxy for image quality\u2014compared to state-of-the-art methods, while still improving accuracy:\n\n **Aesthetic Scores:** SD: 5.19, LMD: 4.65, MultiDiffusion: 4.40, Ranni: 4.43, Ours: 5.13.\n\nThis improvement is largely due to our fine-tuning scheme, which modifies only a small part of the model, preserving the overall image quality. In contrast, as shown in Table 9, fine-tuning all model parameters significantly degrades image quality (Aesthetic Score: 4.12).\n\n\n> **The approach assumes that data generated with reliable seeds is of higher quality for model fine-tuning, but lacks empirical comparisons with real-world datasets.**\n\nWe would like to clarify that our approach does not claim that synthetic data is inherently better than real-world data. Instead, our focus is on the economic and practical benefits of using self-generated data, which provides a cost-effective way to fine-tune models, especially when high-quality real-world datasets are scarce or difficult to obtain. While empirical comparisons with real-world datasets can be interesting, a direct apples-to-apples comparison is non-trivial since a sufficiently diverse set of high-quality real-world images for specific prompts, such as \"four coins,\" is not always readily available or easy to assemble. If the reviewer has a specific real-world dataset in mind that could be suitable for comparison, we would be happy to explore it and provide further insights.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your thoughtful feedback and for raising your rating. We appreciate your acknowledgment of the clarified points and the work's relevance to the community. 
We remain at your disposal for any further requests.\"}", "{\"title\": \"Thank you\", \"comment\": \"The author has addressed my concerns, especially on occluded samples.\\n\\nThe main idea is easy to follow. \\n\\nThe technical contribution is weak, but it will interest most of our community if published.\\n\\nI raised my rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"metareview\": \"The authors present a seed-mining strategy for enhancing reliable compositional text-to-image generation. The rebuttal addressed several key concerns, leading two reviewers to raise their scores. Overall, the reviewers expressed positive sentiments about the paper, though some noted that the technical contributions could have been stronger. Additionally, the proposed approach demonstrates improved compositional capabilities, but this comes at the expense of diversity and image quality.\\n\\nDespite these limitations, the paper provides compelling empirical results and contributes valuable insights to the field. Based on the reviewers' assessments and the discussion, the AC panel has decided to accept the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer ZEiT's weakness comments were addressed by the reviewers and the score was increased.\\n\\nReviewer uQGW asked for comparison with Ranni (CVPR'24) approach and qualitative comparison for the multiple category numerical composition case, which were provided by the authors and the score was increased. \\n\\nReviewer MsjD is yet to participate in the discussion, though there were several important questions asked by the reviewer.\"}", "{\"comment\": \"The authors have addressed all my questions. Especially my concern about comparison against Ranni and the image resolutions issue. Further, they also included the visuals that I had asked for. 
I am raising my rating as I am satisfied with their response.\n\nHowever, I would strongly advise the authors to maintain two tables in the camera-ready version of the paper, one for 512x512 and one for 768x768, if it is difficult to implement LMD and MultiDiffusion on 768x768. Two tables will make the comparisons fairer.\"}", "{\"comment\": \"> **LMD/MultiDiffusion output images at 512\u00d7512 while the method outputs images at 768x768?**\n\nThank you for pointing this out. The main reason is that LMD and MultiDiffusion are not available for the 768\u00d7768 resolution models, and it is not trivial to reimplement these methods on higher-resolution models due to their complexity. To provide additional insights, we re-implemented our method on the 512\u00d7512 version of Stable Diffusion 2.1 and report the results in Appendix A.6.2, which show consistent performance trends; our approach continues to excel in numerical composition and recall, while LMD and MultiDiffusion show strengths in spatial composition. \nWe provide qualitative comparisons in Appendix A.7, which demonstrate the evident visual artifacts in images generated by LMD and MultiDiffusion, although the aesthetic score metric fails to reflect them. \nAdditionally, it is worth noting that our proposed seed selection strategy is a distinct and orthogonal approach to inference-time methods such as LMD and MultiDiffusion. 
It is simple, easily adaptable, and could potentially be integrated with these methods in future work to leverage the strengths of both.\\n\\n\\n\\n| Method | **Numerical** | | | **Spatial** | | |\\n|----------------------------------------------|-----------------------------|-----------|-----------|-----------------------------|-----------|-----------|\\n| | Acc \\u2191 | Aes \\u2191 | Rec \\u2191 | Acc \\u2191 | Aes \\u2191 | Rec \\u2191 |\\n| Stable Diffusion 2.1 (512\\u00d7512) | 34.0 | 4.39 | 71.9 | 16.9 | 4.55 | **74.7** |\\n| + sampling with reliable seeds | 37.0 | 4.39 | **76.7** | 18.8 | 4.44 | 70.9 |\\n| + fine-tuning (random) | 33.7 | 3.98 | 70.4 | 19.1 | 4.19 | 66.4 |\\n| + fine-tuning (reliable) | 37.3 | 3.93 | 72.8 | 23.4 | 4.11 | 69.7 |\\n| + fine-tuning (random + rectified) | 40.7 | 3.94 | 70.3 | 25.2 | 3.93 | 62.4 |\\n| + fine-tuning (reliable + rectified) | **43.2** | 3.99 | 71.2 | 25.6 | 3.86 | 62.0 |\\n| + LMD [Lian et al., 2023] | 35.8 | **4.65** | 49.4 | **51.9** | **4.77** | 44.2 |\\n| + MultiDiffusion [Bar-Tal et al., 2023] | 29.2 | 4.40 | 36.2 | 51.4 | 4.19 | 39.6 |\\n\\n\\n\\n> **How does our method perform in the case of multiple-category numerical composition?**\\n\\nThank you for this interesting suggestion regarding multi-category scenarios (e.g., \\\"2 airplanes and 4 birds\\\"). These scenarios are significantly more challenging than single-category cases.\\nTo test this scenario, we created a test set of 600 prompts, featuring 10 category pairs with diverse backgrounds and numerical instructions (see Appendix A.6.2).\\nOur evaluation in the table below shows that all existing methods, including ours, face difficulties in such conditions. 
Note that we did not re-train our model specifically for this scenario but used the same model fine-tuned for single-category scenarios without any modifications.\nAs can be seen in the table below, our approach achieves a 5.7% increase in accuracy compared to the original SD model and outperforms Ranni by 3.4%. It also improves PixArt-\u03b1 by 3.7%.\n\n| Method | Acc \u2191 | Aes \u2191 | Rec \u2191 |\n|-------------------------------------------|--------|--------|--------|\n| Stable Diffusion 2.1 (768\u00d7768) | 10.0 | **5.06**| **71.4**|\n| + sampling with reliable seeds | 11.5 | 5.08 | 68.6 |\n| + fine-tuning (reliable + rectified) | **15.7**| 4.91 | 61.6 |\n| + Ranni [Feng et al., 2024] | 12.3 | 4.46 | 41.8 |\n|-------------------------------------------|--------|--------|--------|\n| PixArt-\u03b1 | 12.8 | **5.10**| **81.7**|\n| + sampling with reliable seeds | 14.8 | 4.88 | 64.7 |\n| + fine-tuning (reliable + rectified) | **16.5**| 4.86 | 67.6 |\n\n\n\n[1] Feng, Yutong, et al. \"Ranni: Taming text-to-image diffusion for accurate instruction following.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n---\n\nWe hope these clarifications and additional experiments address the reviewer\u2019s concerns. If further clarifications are needed, we are happy to provide them.\"}", "{\"summary\": \"This paper tackles two main aspects of diffusion models: numerical and spatial generation. The aim is to use the diffusion model as is without additional inputs such as layouts. First, reliable seeds are mined which produce correct results for the numerical and spatial generations. Then, these seeds are used to create a generative dataset and the model is fine-tuned on this dataset to improve performance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. 
The main advantage of this work is that no additional modules/trainable parameters need to be added to the diffusion model to incorporate layouts or bounding boxes, as other works usually do.\n2. Extensive experimentation is conducted to validate the reliable seeds hypothesis.\n3. Once reliable seeds are mined, the authors have experimented with a broad spectrum of ways to use them to enhance the model's performance.\", \"weaknesses\": \"1. Baselines: Newer methods to accomplish this task have been developed after LMD, such as [1], which shows an improvement over LMD. This work should be compared with [1] instead of LMD to demonstrate the efficacy of this approach.\n\nI have clubbed the other points in the questions section.\n\n---\n[1] Feng, Yutong, et al. \"Ranni: Taming text-to-image diffusion for accurate instruction following.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"1. What are the numbers when compared to those of [1]?\n\n---\n\n2. Comparison against baselines: As the 512x512 implementation of Stable Diffusion is used for LMD and MultiDiffusion, comparing it with the 768x768 version becomes unfair. What are the numbers for Table 4 when using the 512x512 Stable Diffusion for this method instead of the 768x768?\n\n---\n\n3. Mixture of objects for numerical compositions. All the results seem to display numerical compositions of a single object. 
How are the results when I compose multiple objects, such as \\\"2 airplanes and 4 birds in the sky\\\", and how do the baselines compare with this method for such cases?\\n\\n---\\n---\\nI will reconsider my rating if these concerns are addressed.\\n\\nPlease correct me if you think I have misunderstood any aspect of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the challenges faced by text-to-image models in handling compositional prompts, such as accurately rendering object quantities and spatial relations. It highlights the impact of initial random seeds on the arrangement and fidelity of generated images, proposing a method to improve model performance by identifying and leveraging \\u201creliable seeds.\\u201d The paper\\u2019s main contributions include: 1) a generation strategy based on reliable seeds to reduce the need for manual annotations by automatically generating a high-quality dataset; 2) fine-tuning the model on self-generated reliable data to enhance numerical and spatial compositional accuracy; and 3) implementing a seed-based sampling strategy that improves generation accuracy without additional computation or training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Provides a novel, data-efficient method to improve compositional accuracy in text-to-image generation by harnessing seed variability.\\n2. The automatic generation of a training dataset with reliable seeds reduces the labor-intensive process of manual annotation.\\n3. Extensive quantitative and qualitative evaluations demonstrate the approach\\u2019s effectiveness in improving both numerical and spatial compositional tasks across different models.\", \"weaknesses\": \"1. 
The reliance on selected seeds may limit the diversity of generated outputs, as increasing accuracy through reliable seeds could restrict the model\\u2019s range of variations.\\n2. There is no method presented for automatically selecting reliable seeds during inference, limiting the approach\\u2019s applicability to other models and use cases.\\n3. Potential decline in overall image generation quality when fine-tuning on self-generated data remains unexplored, especially concerning aesthetics and real-world accuracy.\\n4. The approach assumes that data generated with reliable seeds is of higher quality for model fine-tuning, but lacks empirical comparisons with real-world datasets or alternative high-quality sources.\\n5. Limited generalization testing to other diffusion models beyond Stable Diffusion and PixArt-\\u03b1; therefore, the approach\\u2019s adaptability to diverse architectures is unclear.\", \"questions\": \"The citation format is slightly less standardized and the consistency of the references in the citation section should be ensured.\\n\\nOne key limitation is the lack of an automatic, inference-time method to select reliable seeds for generating accurate compositions. Would the authors consider developing a mechanism, such as a predictive model or algorithm, to dynamically choose reliable seeds based on prompt characteristics? This would significantly improve the model\\u2019s generalizability and practical use.\\n\\nFine-tuning on self-generated data inherently risks reducing image diversity or amplifying generation biases. Could the authors clarify how they ensured that this self-generated dataset maintains high quality compared to real-world or externally validated datasets? Additionally, what safeguards are in place to prevent potential degradation in image quality or unintended biases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
5BRFddsAai
HASARD: A Benchmark for Vision-Based Safe Reinforcement Learning in Embodied Agents
[ "Tristan Tomilin", "Meng Fang", "Mykola Pechenizkiy" ]
Advancing safe autonomous systems through reinforcement learning (RL) requires robust benchmarks to evaluate performance, analyze methods, and assess agent competencies. Humans primarily rely on embodied visual perception to safely navigate and interact with their surroundings, making it a valuable capability for RL agents. However, existing vision-based 3D benchmarks only consider simple navigation tasks. To address this shortcoming, we introduce **HASARD**, a suite of diverse and complex tasks to **HA**rness **SA**fe **R**L with **D**oom, requiring strategic decision-making, comprehending spatial relationships, and predicting the short-term future. HASARD features three difficulty levels and two action spaces. An empirical evaluation of popular baseline methods demonstrates the benchmark's complexity, unique challenges, and reward-cost trade-offs. Visualizing agent navigation during training with top-down heatmaps provides insight into a method's learning process. Incrementally training across difficulty levels offers an implicit learning curriculum. HASARD is the first safe RL benchmark to exclusively target egocentric vision-based learning, offering a cost-effective and insightful way to explore the potential and boundaries of current and future safe RL methods. The environments and baseline implementations are open-sourced.
[ "reinforcement learning", "AI safety", "safe RL", "constrained RL", "benchmark", "vizdoom", "3D", "difficulty levels" ]
Accept (Poster)
https://openreview.net/pdf?id=5BRFddsAai
https://openreview.net/forum?id=5BRFddsAai
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z6XqcT9sCX", "ujJAlbNKEQ", "svL9lk8d0y", "q6bZhVaOLy", "oUcCzqNyAc", "oSrBJkrA7q", "lR1ojuMRtX", "jYhWT8bnG0", "iqY55OPbXO", "eomoI6iHmM", "eQuCqxOw7B", "e7PWGYHtVR", "XzfrpTkHPs", "VaNagg5LLl", "SFbIV15RZA", "RPwKV96RHN", "QAIHgC2CWn", "OyKuYrYpNx", "OvTk8vK9bV", "JYSdHpNxR1", "Ha0dOXXfPV", "GzHKocSDEs", "FKz8NuYphS", "Ao8TmYT9kX", "AIWGWPEIlU", "9ixF7aYzIA", "8HiABx23Dt", "7pKuav009D" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "meta_review" ], "note_created": [ 1732469455822, 1732620396963, 1732418262881, 1732586990355, 1732418653141, 1732418049892, 1731605505706, 1730723176606, 1732138080697, 1731605461636, 1732418774396, 1733273653803, 1730667866138, 1732527370826, 1732192565115, 1732418478748, 1733275145601, 1733273758790, 1732188499301, 1733274552162, 1732508872139, 1733273229428, 1732599232974, 1729488349936, 1733274164144, 1730710315709, 1737524065673, 1734725420001 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_Rd3K" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_NtcD" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_FzXY" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_FzXY" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_Rd3K" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_NMGZ" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_NMGZ" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_NtcD" ], [ "ICLR.cc/2025/Conference/Submission10610/Authors" ], [ "ICLR.cc/2025/Conference/Submission10610/Reviewer_NMGZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10610/Area_Chair_CS2t" ] ], "structured_content_str": [ "{\"title\": \"Response to rebuttal\", \"comment\": \"Dear authors,\\n\\nThank you for attempting to address the comments from all the reviewers. Unfortunately, I am still not convinced about the significance of the proposed benchmark, the applicability of algorithms tested on it to other more realistic scenarios where safety would be actually meaningful, and I agree with comments by other reviewers regarding the lack of physical realism in ViZDoom and the lack of relevant modern baselines. \\n\\nAs such, I am not able to recommend accepting the paper.\"}", "{\"comment\": \"I have not seen the author's response. I concur with the other reviewers that ViZDoom lacks a detailed simulation of real-world physics.\"}", "{\"comment\": \"**W3. 
Task Complexity**\\n\\n**Focus** \\nWhile we acknowledge that environment suites like Safety-Gymnasium already offer embodied perception environments with more sophisticated physics simulation, our aim is to expand the scope beyond basic navigation and continuous robotic motor control tasks. Rather than competing directly with suites like Safety-Gymnasium, HASARD serves as a complementary benchmark. While safe behavior in realistic physics and lifelike environments is undoubtedly an important direction in the field, HASARD emphasizes fostering higher-order comprehension and decision-making. \\n\\nThe reviewer has raised an important point about the sacrifice of realism: indeed, there is a trade-off between computational efficiency and simulation complexity. HASARD caters to complexity not in terms of the already widely explored continuous control problems, but in the ability to perceive, reason, and plan, all while being orders of magnitude faster than complex physics-based simulation environments such as Safety-Gymnasium. This allows researchers to explore other aspects of safe RL at a fraction of the computational cost, especially with the sample-hungry model-free approaches.\\n\\n**Action Space** \\nRegarding the action space, we've introduced 2 settings. We conducted our main experiments using the simplified version of the action space (ranging up to 54 multi-discrete actions, depending on the task). This was done to demonstrate the (1) solvability of the tasks, (2) various reward-cost trade-offs, (3) learning stage analysis, and (4) complexity across difficulty levels. However, the core setting of HASARD is meant to incorporate the full action space of DOOM, featuring **864** multi-discrete actions combined with 2 continuous actions. This significantly raises the difficulty even in Level 1, as we showed in Appendix B.1. \\n\\nIt is arguable whether the continuous action spaces for Safety-Gymnasium agents are more challenging. 
The Point, Car, and Racecar agents operate with 2 continuous actions (steering and acceleration), Ant uses 8 continuous joint torques, and Doggo has 12 continuous joint torques. Even if this were the case, HASARD does not directly aim to compete with such settings, instead targeting domains other than continuous motor control. The simplified action setting in HASARD is meant to streamline development by enabling faster analysis, quicker iteration, and immediate feedback. It allows researchers to test and refine methods efficiently, focusing on task dynamics without the computational overhead of the full action space.\\n\\n**Hard Constraints** \\nWe appreciate the reviewer raising this point. We wish to clarify that introducing hard constraints is not a novel contribution of our work, nor is it a central focus of this paper, as similar setups can be implemented in other libraries with little effort. Do note, however, that our modifications vary between tasks and go beyond only lowering the cost threshold to zero. For example, in *Armament Burden*, exceeding the carrying capacity results in the loss of all previously collected weapons. The aim of the hard constraint setting is to provide a wider variety of evaluation settings, due to various safety formulations. We will reduce the emphasis and claims regarding the hard constraints in the writing.\\n\\nThe responses to the following weaknesses and questions will further describe the challenges of the HASARD scenarios.\"}", "{\"comment\": \"**Q1. 
Which Tasks Require Memory and Planning?**\\n\\nIn **Armament Burden**, the agent needs to memorize which items it has already collected to estimate how many more, and of what type, it is reasonable to gather before returning to the delivery zone. There is no direct indicator of the carried weight from a raw observation. The only feedback is the slowdown the agent experiences once the capacity is exceeded. When setting off from the starting zone, the agent should plan a route specifying which items to obtain and in which order, to maximize its reward in a limited time. The weapon types and their locations are random each time, presenting a unique setting at every start. Due to partial observability, especially in higher levels, where the entire environment and obtainable items are not visible from the starting zone, it is advantageous for the agent to memorize the locations of items it has seen but not obtained in preceding runs during the same episode.\\n\\nIn **Collateral Damage**, the movement and velocity of enemy and neutral units cannot be inferred from a single frame, necessitating a capacity for short-term memory. Each entity type exhibits different movement patterns that can be anticipated to some extent. By accurately perceiving depth and predicting future positions of the entities, the agent can plan its actions and time its rocket launches effectively to hit targets while avoiding collateral damage to neutral units.\\n\\nSimilar to predicting the sporadic movements of units in *Collateral Damage*, the agent must anticipate unit trajectories in **Detonator's Dilemma**, accompanied by a greater variety in unit behaviors and a more complex terrain. With barrels scattered across the map and both neutral units and barrels respawning after elimination, the agent benefits from maintaining a memory of prior detonations and unit locations to optimize its strategy. 
For instance, if the agent observes that its immediate area has numerous barrels but is heavily crowded with neutral units, it might strategically navigate to a less populated area to detonate barrels there. By retaining this memory, the agent can later return to the original location when conditions are more favorable.\\n\\nWe will further expand the environment descriptions in Appendix A to incorporate these details. If the reviewer feels that our claims regarding memory and planning capabilities are overstated or artificial, we are open to revising the emphasis.\\n\\n\\n**W1. Outdated Baselines**\\n\\nWe appreciate the reviewer\\u2019s concern regarding the use of recent baselines. The baseline methods we selected were chosen for their robustness and proven track record across a variety of prior benchmarks. While these methods may not represent the absolute latest developments, they are well-tested and reliable for evaluating the complexity and nuances of HASARD. That said, we acknowledge that integrating more recent vision-based safe RL methods could enhance the relevance of HASARD as a benchmark.\\n\\nWhile Lambda, Safe SLAC, and SafeDreamer have public author-provided implementations, they are not yet integrated into widely-used Safe RL libraries such as SSA [1], FSRL [2], SafePO [3], and OmniSafe [4], which could streamline their adoption. Adapting them to new environments like HASARD requires substantial effort in terms of implementation adjustments.\\n\\nAdditionally, we encountered compatibility issues during our attempts to incorporate these methods. For instance, SafeDreamer relies on JAX [5], which is well-optimized for environments natively written in JAX but presents challenges when integrating with non-native environments. Similarly, neither Lambda nor Safe SLAC could be installed directly using the dependencies listed in their respective repositories. 
Despite extensive efforts and manual adjustments, we were unable to get these methods to train effectively.\\n\\n[1] Ray, Alex, Joshua Achiam, and Dario Amodei. \\\"Benchmarking safe exploration in deep reinforcement learning.\\\" arXiv preprint arXiv:1910.01708 7.1 (2019): 2.\\n\\n[2] Liu, Zuxin, et al. \\\"Datasets and benchmarks for offline safe reinforcement learning.\\\" arXiv preprint arXiv:2306.09303 (2023).\\n\\n[3] https://github.com/PKU-Alignment/Safe-Policy-Optimization\\n\\n[4] Ji, Jiaming, et al. \\\"Omnisafe: An infrastructure for accelerating safe reinforcement learning research.\\\" Journal of Machine Learning Research 25.285 (2024): 1-6.\\n\\n[5] https://github.com/PKU-Alignment/SafeDreamer\"}", "{\"comment\": \"**W2. Solvability by Existing Algorithms**\\n\\n**Are the tasks solved?** \\nThe reviewer has rightfully pointed out the lack of a performance upper bound, without which we cannot determine whether the tasks are already solved. Regular PPO that neglects costs could set a reasonable upper bound for reward, but we are really interested in the highest reward obtainable while adhering to the safety constraints. In our attempt to address this issue, we have included a **human baseline**, similar to the Atari benchmark. A human player was evaluated over 10 episodes on Level 3 of each task, recording both reward and cost. In the following table, we compare the human baseline to the Level 3 main results in the paper obtained with the simplified action space. As noted in Appendix B.1, this had more favorable outcomes than with agents attempting to leverage the full action space. 
The human player, however, had access to the more complicated original full action space.\\n\\n|Method|AB R\\u2191|AB C\\u2193|VV R\\u2191|VV C\\u2193|RR R\\u2191|RR C\\u2193|CD R\\u2191|CD C\\u2193|PP R\\u2191|PP C\\u2193|DD R\\u2191|DD C\\u2193|\\n|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n|PPO|1.99|118.22|42.20|347.77|53.16|68.27|34.86|84.44|487.17|894.22|49.36|23.72|\\n|PPOCost|*0.04*|*0.05*|*10.61*|*14.76*|*0.01*|*0.02*|*3.93*|*1.35*|*15.39*|*6.64*|49.95|19.79|\\n|PPOLag|2.03|*31.89*|22.77|52.54|8.02|9.74|12.22|5.29|23.87|*34.98*|*19.27*|5.26|\\n|PPOSaut\\u00e9|2.32|*37.68*|18.89|246.80|*1.17*|*3.50*|*2.74*|*2.76*|101.64|176.14|22.20|10.36|\\n|PPOPID|*2.78*|*33.89*|*25.80*|*49.02*|13.51|5.14|*14.51*|*4.97*|*54.02*|*49.57*|*20.49*|*4.87*|\\n|P3O|*2.61*|*29.94*|*24.93*|*49.84*|*14.29*|*4.19*|13.57|*4.12*|269.14|428.72|26.73|7.49|\\n|Human|***8.03***|*46.43*|***38.06***|*41.12*|***34.21***|*3.17*|***19.43***|*4.77*|***147.32***|*47.45*|***43.90***|*4.02*|\\n\\nThe results indicate that none of the safe RL baselines have achieved human-level performance, suggesting that the tasks are still far from being solved. It is worth noting that the human baseline is suboptimal, leaving ample room for improvement. The sole purpose of this baseline is to show the potential for improved performance, considering also that the human player has privileged knowledge of how the game works. The RL agent has to learn this from pixel observations.\\n\\n**Why are baselines incapable?** \\nIn many instances, the human player reached a higher score not due to more precise control, but through strategies that none of the evaluated baselines managed to learn. After all, while the RL method can consistently output precise actions for each frame, human outputs are inherently noisier.\\n\\nIn **Armament Burden**, the agent has the option to discard all weapons. 
This strategy can be used to get rid of the very heavy weapons far from the starting zone, since carrying them back can be very costly. None of our evaluated baselines managed to learn this behavior. Instead, they simply avoided picking up heavy weapons altogether, as discarding weapons does not provide an immediate or obvious reward. Level 3 introduces decoy items that add to the carrying load but do not grant any reward when delivered. None of the methods were able to figure out how to avoid these items.\\n\\nIn **Precipice Plunge**, the agent has the option to restart the episode, if it decides that no safe actions are possible. This becomes very relevant in Level 3, as the height differences to all surrounding blocks might be too great to permit a safe leap. However, again none of the agents learned to utilize this. Instead, they either remained stuck on a block until the end of the episode or ended up making an unsafe action.\\n\\nIn Levels 2 and 3 of **Remedy Rush**, the environment becomes periodically dark after short time intervals, limiting the agent's ability to see the items it needs to collect or avoid. However, night vision goggles are located in the map, granting permanent vision when obtained. Despite this, none of the baselines developed a consistent strategy to seek out the goggles early in the episode before proceeding with their item collection.\\n\\nLevel 3 of **Detonator's Dilemma** allows the agent to push the barrels to safer locations with few or no units nearby before detonating them. However, since pushing barrels does not yield immediate rewards, the baseline methods failed to adopt this strategy. Instead, the agents tend to avoid going near the barrels, as detonating them while standing too close often results in the agent harming itself and incurring a cost.\"}", "{\"comment\": \"**W3 & Q2. 
ViZDoom Lacks Realism**\\n\\nWhile we acknowledge that engines like Isaac Gym offer advanced physics simulation, our aim is to expand the scope not only beyond conventional 2D environments (listed as a strength by the reviewer), but also beyond continuous-control robotic manipulation tasks. Rather than focusing on realistic physics or lifelike environments, our emphasis is on fostering decision-making in visually intricate settings. Notably, many recent studies in safe RL continue to utilize simplified, unrealistic environments for experimental evaluation [2, 3], indicating their ongoing relevance. ViZDoom, in particular, remains widely adopted in recent research [4, 5, 6, 7]. Appendix F of the paper provides further rationale for our choice of ViZDoom.\\n\\nFigure 18b in Appendix G depicts the learning curves for Level 3 tasks. Training has not fully converged across multiple methods even after 5e8 environment iterations, underscoring the complexity of these environments and the high iteration count needed to approach peak performance. Despite the complexity of the high-dimensional pixel-based inputs, the framework was able to simulate 500M frames and perform 150K policy updates in a training time of ~3 hours on a single GPU. This complexity doesn\\u2019t stem from advanced physics or intricate motor control but instead from demands like precise navigation, spatial awareness, depth perception, and predicting the movement of other entities. Combining these challenges with accurate physics and continuous motor control would make the problem infeasible to solve within a reasonable timeframe on standard hardware. Built on top of ViZDoom, HASARD therefore offers a balanced approach, blending fast simulation with complex tasks that remain meaningfully challenging.\\n\\n\\n**W4. Inaccessible Video**\\n\\nWe tried accessing the video hosted on [emalm](https://emalm.com/?v=dGLYX) from multiple devices and did not encounter any issues. 
To ensure the reviewer can view the demo, we have also re-uploaded the video to the [jumpshare](https://jumpshare.com/v/abSD9rLLXQDdkIXkQ3c7) platform. Please let us know if further adjustments are needed.\\n\\nWe thank the reviewer for their feedback! Please let us know if there is anything else we ought to address.\\n\\n[1] Wachi, Akifumi, Xun Shen, and Yanan Sui. \\\"A Survey of Constraint Formulations in Safe Reinforcement Learning.\\\" arXiv preprint arXiv:2402.02025 (2024).\\n\\n[2] Wachi, Akifumi, et al. \\\"Safe exploration in reinforcement learning: A generalized formulation and algorithms.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Den Hengst, Floris, et al. \\\"Planning for potential: efficient safe reinforcement learning.\\\" Machine Learning 111.6 (2022): 2255-2274.\\n\\n[4] Park, Junseok, et al. \\\"Unveiling the Significance of Toddler-Inspired Reward Transition in Goal-Oriented Reinforcement Learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 1. 2024.\\n\\n[5] Kim, Seung Wook, et al. \\\"Neuralfield-ldm: Scene generation with hierarchical latent diffusion models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.\\n\\n[6] Zhai, Yunpeng, et al. \\\"Stabilizing Visual Reinforcement Learning via Asymmetric Interactive Cooperation.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[7] Valevski, Dani, et al. \\\"Diffusion models are real-time game engines.\\\" arXiv preprint arXiv:2408.14837 (2024).\"}", "{\"summary\": \"The paper presents HASARD, a benchmark tailored to vision-based safe reinforcement learning (RL) using egocentric, pixel-based inputs. Built on the ViZDoom platform, HASARD comprises six unique 3D environments across three difficulty levels, each designed to test safe RL in increasingly complex and dynamic scenarios. 
The benchmark allows for a range of agent objectives, from navigation to item collection and hazard avoidance, focusing explicitly on embodied safe RL with vision-based inputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Diverse Scenarios: HASARD provides varied 3D environments with different objectives and challenges, such as item collection, navigating hazardous terrain, and avoiding neutral units. This variety enriches the learning and testing possibilities, ensuring that the benchmark assesses both task performance and safety considerations.\", \"structured_curriculum\": \"By offering three difficulty levels, HASARD presents a built-in curriculum for training RL agents, allowing gradual learning in increasingly challenging conditions. This approach is effective for developing robust agents that can generalize to new, more complex scenarios.\", \"weaknesses\": \"Outdated Baselines: All the baseline algorithms were published over two years ago, and the original implementations of these baselines do not support visual inputs. The lack of SOTA vision-input baselines, such as Lambda [1], Safe SLAC [2], and SafeDreamer [3], limits the benchmark\\u2019s relevance in evaluating current state-of-the-art safe RL methods.\", \"solvability_by_existing_algorithms\": \"Have the tasks introduced in this framework already been solved by existing algorithms? For instance, can PPO-PID successfully address these tasks? Are there settings within HASARD that current algorithms struggle to handle? By not including experiments with the latest baselines, it is unclear whether the HASARD benchmark will drive the development of new algorithms or simply reaffirm existing solutions.\", \"task_complexity\": \"What is the primary contribution of HASARD compared to existing safety benchmarks, such as Safety Gymnasium? Compared to Safety Gymnasium, HASARD primarily adds hard constraints and fast simulation. 
However, implementing hard constraints is relatively straightforward, merely requiring a single line of code to terminate the episode upon any unsafe action. As for fast simulation, HASARD achieves this by sacrificing simulation fidelity and simplifying the action space, which limits its meaningfulness as a contribution compared to Safety Gymnasium.\\n\\nMoreover, most tasks in HASARD revolve around avoiding hazardous obstacles, which has already been extensively addressed and solved in Safety Gymnasium by existing algorithms (e.g., [1-3]). Given HASARD's simplified dynamics and action space, it would need to introduce more complex tasks than those in Safety Gymnasium to stimulate the development of new algorithms. However, I did not observe any such complexity in the task design that would distinguish it from prior benchmarks.\\n\\n[1] CONSTRAINED POLICY OPTIMIZATION VIA BAYESIAN WORLD MODELS\\n\\n[2] Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation\\n\\n[3] SafeDreamer: Safe Reinforcement Learning with World Models\", \"questions\": \"1. Which tasks in HASARD require memory capabilities, and which involve long-horizon decision-making? It would be helpful if the authors could clarify how the benchmark challenges an agent\\u2019s memory and planning capabilities over extended time sequences.\\n\\n2. Why did you choose ViZDoom to build this benchmark? Does this platform offer specific advantages? From my perspective, it seems that ViZDoom allows only minor modifications to its existing game structure and may lack the flexibility to define more complex, varied tasks. Why not consider using a truly open-world environment, such as MineDojo [4], which enables safer RL environments with more sophisticated task definitions? A platform like MineDojo could potentially support a broader range of scenarios and facilitate more diverse task creation.\\n\\n3. 
Additionally, I noticed that you used Omnisafe for algorithm benchmarking, but this wasn\\u2019t mentioned in the paper. I have some questions regarding one of the baselines you implemented. In the P3O algorithm code (see here:https://github.com/PKU-Alignment/omnisafe/blob/main/omnisafe/algorithms/on_policy/penalty_function/p3o.py#L82), there is a term J_c in the loss function that appears to be independent of the network parameters. What effect does including J_c in the loss function have? I observed in your experimental results that P3O also fails to satisfy the constraints, which may be related to the J_c term. This raises some doubts about the effectiveness of this baseline.\\n\\n[4] MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**W1 & Q1. Unpragmatic Constraints**\\n\\nWe acknowledge the significance of safety and accurate motor control in robotics and control systems, which has been a central focus within the safe RL community. However, as the scope of safety in RL extends to broader domains beyond complex robotic motor control, including autonomous navigation [1], healthcare [2], LM safety [3], and energy systems management [4], there is a need for safe RL methods that could generalize across this broad domain spectrum. Such methods should be capable of addressing a wide range of safety constraints, thereby expanding the applicability and impact of safe RL to yet unexplored domains.\\n\\nTo support the development of such methods, we require different environments and benchmarks that facilitate this, rather than all of them narrowly tailoring to individual, pragmatic applications. Relying solely on robotic simulation suites like Safety-Gymnasium as the gold standard narrows the scope. 
The community could benefit from embracing a broader spectrum of meaningful environments that challenge diverse competencies and encourage generalizable solutions. \n\nHASARD does not directly address any of these specific application domains, nor does it attempt to cover the entire range of safety aspects, but it does extend beyond conventional 2D environments and widely used robotic control tasks. Instead of focusing on realistic physics or lifelike simulations, HASARD emphasizes fostering decision-making in visually complex and challenging settings, offering a fresh perspective for advancing the field.\n\nNot all constraints in HASARD are entirely disconnected from real-world settings. Although ViZDoom does not facilitate the accurate motor control required to match the complexity of the real world, it simulates the challenges of decision-making in visually intricate environments, such as perceiving depth. \n\nIn *Precipice Plunge*, the agent has to determine whether stepping or jumping down to a lower platform is safe by assessing whether it's close enough to avoid fall damage. This parallels the challenges faced by robots navigating complex terrains.\n\nIn *Armament Burden*, the agent must learn how each item contributes to its carrying load and balance this against how much weight it can carry. It should also plan efficient paths to maximize time efficiency. It must factor in the distance to the delivery zone, evaluating whether the potential reward for delivering an item justifies temporarily exceeding its carrying capacity. These characteristics could be tied to logistics or supply chain optimization, such as a warehouse robot managing inventory.\n\nIn *Collateral Damage* and *Detonator's Dilemma*, the agent must anticipate the future positions of other entities by analyzing their speed and past trajectories. 
This mirrors a similar competency in autonomous driving, where safety hinges on accurately predicting the movements of nearby vehicles and pedestrians.\n\nThe focus of safety here is less about precise control problems and more about strategic decision-making. While we acknowledge that these game-based environments are detached from reality, they effectively simulate high-level challenges and complexities. Recent studies in safe RL continue to utilize simplified, unrealistic environments for experimental evaluation, such as grid worlds [5, 6, 7], to investigate some very intricate aspects of their methods. These simplified settings remain relevant due to their interpretability, which is often lacking in more realistic environments. Additionally, they are not directly tied to pragmatic safety constraints. In contrast, ultra-realistic and highly complex environments make it challenging to isolate specific factors.\n\n[1] Nehme, Ghadi, and Tejas Y. Deo. \\"Safe Navigation: Training Autonomous Vehicles using Deep Reinforcement Learning in CARLA.\\" arXiv preprint arXiv:2311.10735 (2023).\n\n[2] Cao, Junyu, Esmaeil Keyvanshokooh, and Tian Liu. \\"Safe reinforcement learning with contextual information: Theory and application to personalized comorbidity management.\\" Available at SSRN 4583667 (2023).\n\n[3] Wachi, Akifumi, et al. \\"Stepwise alignment for constrained language model policy optimization.\\" arXiv preprint arXiv:2404.11049 (2024).\n\n[4] Huo, Xiang, et al. \\"Optimal Management of Grid-Interactive Efficient Buildings via Safe Reinforcement Learning.\\" arXiv preprint arXiv:2409.08132 (2024).\n\n[5] Garc\u00eda, Javier, and Fernando Fern\u00e1ndez. \\"A comprehensive survey on safe reinforcement learning.\\" Journal of Machine Learning Research 16.1 (2015): 1437-1480.\n\n[6] Wachi, Akifumi, et al. 
\\"Safe exploration in reinforcement learning: A generalized formulation and algorithms.\\" Advances in Neural Information Processing Systems 36 (2024).\n\n[7] Den Hengst, Floris, et al. \\"Planning for potential: efficient safe reinforcement learning.\\" Machine Learning 111.6 (2022): 2255-2274.\"}", "{\"comment\": \"**W1. Hard Constraints**\n\nThis concern is well-founded. We wish to clarify that our introduction of hard constraints is not presented as a novel contribution, nor is it central to our paper, as similar setups can indeed be implemented in other libraries. However, it is worth noting that simply decreasing the cost threshold to 0 is not our only modification. Additional task-specific adjustments have been made; for example, harsher penalties apply upon incurring costs, often terminating the episode abruptly. In *Armament Burden*, for instance, the agent loses all acquired weapons if it exceeds carrying capacity.\n\nOur primary focus here lies in the evaluation protocol introduced by HASARD, which emphasizes the integration and assessment of safety constraints in RL training. HASARD supports evaluations across diverse safety formulations [1], enabling fairer comparisons by aligning the safety problems that different algorithms are designed to address. If the reviewer nevertheless feels that the emphasis on hard constraints is overstated or the claims appear unsubstantiated, we are open to adjusting the writing to reflect this.\n\n\n**W2 & Q1. Different Safety Budgets**\n\nUnderstanding the trade-off between safety and reward is indeed crucial for evaluating how algorithms adapt to varying safety requirements. We addressed this in Appendix B.2 (referenced in Section 5.1), where we examined the effects of different cost budgets by testing PPOLag and PPOSaut\u00e9 on Level 1 of HASARD, with both higher and lower safety thresholds than the original setting. 
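For context, in PPOLag-style methods the cost budget enters training through a Lagrange-multiplier update of roughly the following shape. This is a minimal illustrative sketch, not our training code; the learning rate, costs, and budget are made-up values:

```python
# Toy sketch of a Lagrangian multiplier update as used in PPOLag-style methods.
# (Illustrative only; names and values are assumptions, not our experiment settings.)
def update_multiplier(lmbda, episode_cost, budget, lr=0.05):
    """Grow the cost penalty while the budget is exceeded, relax it otherwise."""
    return max(0.0, lmbda + lr * (episode_cost - budget))

lmbda = 0.0
for cost in [60.0, 60.0, 60.0, 40.0, 40.0]:  # episode costs against a budget of 50
    lmbda = update_multiplier(lmbda, cost, budget=50.0)
# The multiplier rises while costs exceed the budget and relaxes once they drop below it.
```

A lower budget makes the penalty grow for the same episode costs, which is one mechanism behind the reward drop under stricter safety bounds.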
The results in Figure 14 nicely show that stricter safety bounds lead to lower rewards, with *PPOSaut\\u00e9* exhibiting greater sensitivity than *PPOLag*. We are open to incorporating additional safety thresholds and other baselines if the reviewer believes this would strengthen our work.\\n\\nIf the reviewer feels further details on these results would strengthen the main text, we are open to incorporating the key findings on these trade-offs directly into the main paper. This could potentially replace the hard constraints results section, which may have been overstated (as discussed above). We welcome and would appreciate the reviewer\\u2019s feedback on this matter.\"}", "{\"comment\": \"**Q2. Why ViZDoom?**\\n\\nThe core advantages of ViZDoom lie in its speed, lightweight design, cross-platform compatibility, and ease of configuration. Although it may lack realism, it effectively replicates the notion of navigation and interaction in a 3D realm, which is directly tied to real-world problems. ViZDoom remains a widely adopted platform in the RL community and continues to be used [1, 2, 3, 4]. Indeed, as the reviewer has pointed out, it is useful for the platform to be highly customizable for the creation of more diverse tasks. In Appendix E, we have listed several promising extensions for HASARD. Apart from increasing task complexity through configurable parameters, one could combine challenging elements across tasks. The rationale behind selecting ViZDoom as the foundation for HASARD is further elaborated in Appendix F.\\n\\n**Open-World**\\nIn the context of the tasks we designed, the open-world aspect does not significantly impact the Safe RL challenges. Whether the engine procedurally generates more content as the agent ventures further or whether the agent eventually hits a wall has only a marginal effect on task complexity. 
As demonstrated and discussed earlier, the higher levels of HASARD or the full action space setting already provide a sufficient challenge to current methods. Conversely, having a physically bounded environment enables a more detailed analysis of the agent's movement patterns, allowing us to draw connections between its behavior and the training process or the specific method employed, as examined in Section 5.4. Moreover, tasks with a more bounded scope enhance reproducibility and provide fairer ground for comparison.\n\nOpen-world environments like MineDojo are computationally more expensive to run, making them less suitable for resource-limited settings. ViZDoom, on the other hand, uses rendering with sprites and precomputed lighting, making it computationally inexpensive, lightweight, and accessible, while open-world environments often use dynamic lighting, shadows, and voxel-based rendering, adding a computational overhead.\n\n\n**MineDojo**\nWe agree with the reviewer that a Minecraft-based benchmark holds significant potential for evaluating Safe RL and could challenge similar competencies that HASARD aims to address. Minecraft offers great variety in terms of items, entities, interactions, and possible objectives. A key advantage of MineDojo lies in its extensive knowledge base, built from gameplay videos and the Minecraft Wiki, which could be effectively leveraged for offline Safe RL methods.\n\nHowever, for online learning, the simulation speed might pose challenges. Although neither DOOM nor Minecraft was created for RL experiments, Vanilla Minecraft has notable downsides for RL. Its Java-based engine introduces layers of abstraction, memory management overhead, and rendering complexity, which negatively impact performance. Additionally, while ViZDoom allows efficient parallel execution of multiple instances, Minecraft\u2019s Java implementation, with its higher memory usage and single-threaded limitations, makes scaling less efficient. 
Nonetheless, we recognize a Minecraft-based Safe RL benchmark as a promising avenue for future work.\\n\\n**Q3. How is P3O Implemented?**\\n\\nWe apologize for any confusion. To clarify, we used the Sample-Factory [5] framework for algorithm benchmarking, not Omnisafe. As for P3O, whether it is generally an effective method due to its algorithmic details falls outside the scope of our work. Our focus is on evaluating widely known popular algorithms within our proposed benchmark rather than assessing the intrinsic effectiveness of individual algorithms like P3O.\\n\\n[1] Park, Junseok, et al. \\\"Unveiling the Significance of Toddler-Inspired Reward Transition in Goal-Oriented Reinforcement Learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 1. 2024.\\n\\n[2] Kim, Seung Wook, et al. \\\"Neuralfield-ldm: Scene generation with hierarchical latent diffusion models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.\\n\\n[3] Zhai, Yunpeng, et al. \\\"Stabilizing Visual Reinforcement Learning via Asymmetric Interactive Cooperation.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[4] Valevski, Dani, et al. \\\"Diffusion models are real-time game engines.\\\" arXiv preprint arXiv:2408.14837 (2024).\\n\\n[5] Petrenko, Aleksei, et al. \\\"Sample factory: Egocentric 3d control from pixels at 100000 fps with asynchronous reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2020.\"}", "{\"comment\": \"**W3. Limited Visual Input Analysis**\\n\\nTo demonstrate the visual complexity of HASARD, we leverage the privileged information about the agent's observations in the ViZDoom framework. We create simplified representations through two strategies: segmentation and inclusion of depth information. We hypothesize that simplified representations and auxiliary visual information improve performance.\\n\\n**Segmentation**. 
ViZDoom provides a labels buffer that assigns a ground truth label to each pixel in the observation, identifying the item, unit, wall, or surface it belongs to, thereby effectively segmenting the scene. We assign a predefined fixed color to each such unique object that can be encountered in our environments. This makes the observations consistent across episodes and environments. With the help of the labels buffer, we then map every pixel in the observation to its respective color. \n\nWe train the PPOPID and P3O agents on the segmented observations of *Armament Burden*. This task is suitable due to its high number of different objects: 7 types of weapons, 4 types of decoy items, and 4 types of obstacles. Factoring in the agent, walls, and surfaces, the environment comprises a total of 18 unique labels. The comparison in the table below indicates that **the simplified representations make the task easier**.\n\n|Algorithm|Input|Reward|Cost|\n|---------|-----|------|----|\n|PPOPID|Default Obs|4.88|49.97|\n|PPOPID|Segmentation|5.92|49.22|\n|P3O|Default Obs|4.38|39.66|\n|P3O|Segmentation|5.73|38.16|\n\n**Depth Information**. ViZDoom provides a depth buffer that assigns a value between 0-255 to each pixel in the observation, representing its relative distance from the agent. A value of 255 (fully white) indicates the farthest points, while 0 (fully black) denotes the closest. Intermediate values reflect relative distances between these extremes. This feature allows the agent to directly perceive the proximity of walls, surfaces, and objects.\n\nWe evaluate this approach in the *Precipice Plunge* task, where depth plays a critical role. The agent must accurately gauge distances to platforms below as it attempts to safely descend the cave. Instead of relying solely on the depth buffer, we concatenate its values with the original RGB observations. 
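Concretely, this concatenation amounts to stacking the depth buffer as a fourth channel onto each RGB observation. A toy pure-Python sketch (the actual pipeline operates on ViZDoom's buffers as arrays; shapes and values here are illustrative):

```python
# Toy sketch: append the depth buffer as a 4th channel to an HxWx3 RGB observation.
# (Illustrative only; the real implementation works on ViZDoom's screen/depth buffers.)
def concat_depth(rgb, depth):
    """rgb: HxW grid of (r, g, b) tuples; depth: HxW grid of 0-255 values -> HxWx4."""
    return [
        [(r, g, b, d) for (r, g, b), d in zip(rgb_row, depth_row)]
        for rgb_row, depth_row in zip(rgb, depth)
    ]

rgb = [[(10, 20, 30), (40, 50, 60)]]  # a 1x2 RGB frame
depth = [[0, 255]]                    # closest and farthest pixel
obs = concat_depth(rgb, depth)        # each pixel now carries proximity directly
```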
As indicated by the results below, **learning solely from default observations is more challenging** due to the complexity of accurate depth perception.\\n\\n|Algorithm|Input|Reward|Cost|\\n|---------|-----|------|----|\\n|PPOPID|Default Obs|129.60|50.83|\\n|PPOPID|Default Obs + Depth Buffer|173.15|50.13|\\n|P3O|Default Obs|187.14|121.27|\\n|P3O|Default Obs + Depth Buffer|224.56|102.60|\\n\\n\\n**W5. Real-World Relevance**\\n\\nThe challenges in HASARD do not lie in controlling agents under complex physics or comprehending hyper-realistic visuals but rather in fostering perception, reasoning, and decision-making. Below, we present two examples to further motivate our environments.\\n\\n**Industrial Automation**. Autonomous robots for warehouse logistics navigate simple but dynamic environments where it is important to accurately perceive their surroundings and detect relevant objects. Unlike humanoids, quadrupeds, or precision-focused assembly robots, these systems have simpler action spaces and fewer degrees of freedom. In these settings, robots are tasked with picking and placing items, requiring effective path planning to optimize their operations while adhering to their carrying load (*Armament Burden*). They should accurately detect and avoid obstacles such as shelves, inventory, human workers, other robots, and restricted or unnavigable areas (*Remedy Rush* and *Volcanic Venture*). The robots ought to anticipate the movement and paths of other dynamic entities to avoid collisions, a challenge closely aligned with HASARD\\u2019s focus on predicting future object locations (*Collateral Damage* and *Detonator's Dilemma*). Lastly, due to partial observability, it is beneficial to recall the locations of recently encountered items and entities that are still nearby but no longer within their present field of view (*Armament Burden*).\\n\\n**Autonomous Drones**. 
Although the agent in HASARD cannot fly, the emphasis on accurately perceiving depth is directly applicable to the safe navigation of autonomous drones. Similar to how the agent in *Precipice Plunge* is tasked with vertically navigating the environment, drones frequently perform vertical maneuvers and must adjust their speed based on the proximity of surfaces and obstacles. General-purpose drones are usually controlled by 6 degrees of freedom (3 translational and 3 rotational), corresponding to 6 continuous actions. This complexity is comparable to HASARD\u2019s full action space, which includes two continuous and 864 discrete actions. Similar to HASARD\u2019s tasks, drones must anticipate the movement of other entities, avoid collisions, optimize their flight paths, and take note of recent observations to navigate effectively and safely.\"}", "{\"summary\": \"The paper proposes a new egocentric vision-based 3D simulated environment for benchmarking safe reinforcement learning. The benchmark is more realistic and challenging compared to common prior safe RL benchmark environments. In addition, the paper has evaluations for some safe RL algorithms on the proposed benchmark demonstrating its feasibility of use and the potential for building better approaches to perform more favorably on it.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well motivated and targets an important problem - that of building realistic and reliable RL benchmarks, and more specifically benchmarks for safe RL. This involves addressing challenges with the simple nature of prior benchmarks - both visually and in terms of higher dimensional action space and increased temporal horizons.\", \"The proposed benchmark HASARD is built on top of an existing game engine VizDoom and is able to inherit all of its properties for re-use. 
The multiple levels in HASARD can be potentially helpful in evaluating different notions of safety in proposed safe RL algorithms.\", \"The paper has detailed evaluations of several safe RL algorithms on HASARD indicating that the framework is feasible for training constrained RL policies. The evaluations reveal that simple algorithms based on PPO and constrained PPO can achieve non-trivial performance in the benchmark and also reasonable constraint satisfaction. It is good to see that these simple algorithms do not saturate the benchmark and there is still a lot of room for improvement.\"], \"weaknesses\": [\"Unfortunately, while the paper is a decent attempt at building a safe RL benchmark, I am not convinced the safe RL community will be incentivized to use it. The main reason is that the notions of constraints in this benchmark are not directly tied to the very pragmatic safety considerations that need to be tackled in the real world - ranging from control systems to robotic deployments.\", \"The benchmark feels a bit incremental compared to the already existing VizDoom framework that has been around for years. The modifications for the different levels and environments in this framework do not capture the notions of open-world generalization and realism the field is headed towards in terms of evaluating RL systems. In addition, a lot of prior safe RL works have benchmarked their systems on real-world systems like robotic navigation and manipulation, and I am not convinced that a modified VizDoom framework is likely to create a reasonable impact in the community.\", \"The evaluations are all with variants of PPO and no other safe RL algorithms are tested. 
It is unclear why this is the case, since in my understanding the benchmark should not be tied to a particular type of algorithm\"], \"questions\": [\"Please refer to the weaknesses above:\", \"Unfortunately, while the paper is a decent attempt at building a safe RL benchmark, I am not convinced the safe RL community will be incentivized to use it. The main reason is that the notions of constraints in this benchmark are not directly tied to the very pragmatic safety considerations that need to be tackled in the real world - ranging from control systems to robotic deployments. Could the authors clarify how exactly they envision this benchmark to drive innovation in the safe RL community? And what sub-field of researchers would be likely to use it?\", \"The benchmark feels a bit incremental compared to the already existing VizDoom framework that has been around for years. Can the authors clarify if the proposed modifications are non-trivial and if they can be broadly applied to potentially other frameworks like Minecraft and other games?\", \"The evaluations are all with variants of PPO and no other safe RL algorithms are tested. It is unclear why this is the case, since in my understanding the benchmark should not be tied to a particular type of algorithm. 
Please clarify the evaluations and if there is any specific assumption on the type of safe RL algorithms that could be tested on the benchmark?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful review and for acknowledging the improvements in our rebuttal.\n\nSince OmniSafe is a well-established Safe RL library, we initially used it to validate baseline performance consistency with our Sample-Factory implementations. However, since Sample-Factory offers significantly faster training speeds, we used it for all the experiments in the paper. The implementation of P3O can be found in *sample_factory/algo/learning/p3o_learner.py*.\n\nAs OmniSafe does not natively support discrete action spaces or pixel-based observations, we extended the library to handle these requirements. To provide flexibility for benchmark users who may prefer OmniSafe, we included our extended version as a local module in the repository rather than relying on the pip-installed version. Before making the codebase public, we will update the documentation to make this setup and purpose more transparent.\n\nIf there are any further weaknesses in our work or points that need to be clarified, we are happy to address them. Thank you again for your feedback.\"}", "{\"comment\": \"**W3 & Q3. Only PPO variations**\n\nWe acknowledge the concern regarding the focus on PPO variants in our evaluations and would like to clarify that the benchmark itself is not tied to any specific type of algorithm. Equally, the Safe RL methods we selected are inherently independent of any particular base RL algorithm. However, in the original papers, the authors frequently utilized PPO as a base for their model-free methods. We therefore only included the PPO-based versions not only because those have achieved higher results on prior benchmarks, but also not to compare apples with oranges, i.e., not to conflate differences in performance due to the choice of the base RL algorithm.\n\nTo provide additional context, we also included TRPO-based implementations for some Safe RL methods. In the following table, we present the results for Level 1. 
Results that meet the safety constraint are denoted in *italic*, and the highest reward safe result is depicted in **bold**.\n\n\n|Method|AB R\u2191|AB C\u2193|VV R\u2191|VV C\u2193|RR R\u2191|RR C\u2193|CD R\u2191|CD C\u2193|PP R\u2191|PP C\u2193|DD R\u2191|DD C\u2193|\n|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|\n|PPO|9.68|109.30|51.64|172.93|50.78|52.44|78.61|41.06|243.42|475.62|29.67|14.28|\n|TRPO|7.24|144.79|41.00|185.30|45.70|48.47|67.91|37.29|243.41|468.99|20.03|8.92|\n|PPOLag|7.51|52.41|42.40|52.00|36.37|5.25|29.09|5.61|*147.24*|*44.96*|21.49|5.62|\n|TRPOLag|*4.06*|*48.61*|***30.48***|*49.91*|*21.69*|*4.98*|31.63|7.28|177.93|309.16|16.84|6.85|\n|PPOPID|**8.99**|*49.79*|45.23|50.53|***38.19***|*4.90*|**43.27**|5.03|***231.53***|*43.91*|**26.51**|5.25|\n|TRPOPID|*0.36*|*8.18*|23.81|50.93|*10.36*|*4.54*|34.74|10.13|172.87|299.46|12.81|6.84|\n\nSimilar to prior findings, the TRPO versions of these methods consistently perform noticeably worse than their PPO counterparts. Also note that evaluating all possible Safe RL methods is beyond the scope of our work.\n\n**Q4. 1-1 Comparisons with Prior Benchmarks?**\n\nCould the reviewer clarify what specific features or real-world benchmarks they are referring to? It is not entirely clear to us what exactly is being requested. We've included a performance comparison with Safety-Gymnasium in **Appendix C** of the paper. Among prior Safe RL benchmarks, Safety-Gymnasium stands out as the most widely adopted in recent research and the closest to our work in complexity. Please let us know what other features or benchmarks we should compare.\"}", "{\"comment\": \"**W4. Only Navigation Tasks**\n\nIndeed, most tasks in popular benchmarks such as Safety-Gymnasium (1st-person POV) [1], BulletSafetyGym (proprioceptive) [2], and MetaDrive (3rd-person POV) [3] primarily focus on the agent navigating toward a goal location while avoiding hazardous objects. 
However, they are comparatively simpler to solve. For example, many Safe RL methods that rely on a cost critic only require it to learn to assign low values to observations where hazards are prominently visible in the center of the screen, and higher values to observations where hazards are either absent or located at the edges. This allows the critic to provide accurate value estimates based on a single observation, without requiring any history or further context.\n\nIn contrast, the scenarios in HASARD require a deeper level of perception and decision-making. Agents must comprehend spatial relationships, accurately perceive depth, and predict the movement and future locations of entities. Only the **Remedy Rush** task has a navigation objective similar to the prior benchmarks. However, it features more complex visual surroundings, a greater variety of items to collect, and objects to avoid. In contrast to Safety-Gymnasium\u2019s visually simplistic environments, with distinct, single-palette cylindrical goals and hazard zones that are easy to differentiate, the collectibles and hazardous objects in *Remedy Rush* are more difficult to identify.\n\nThe other 5 tasks each have their own unique nature and distinct objectives. **Volcanic Venture** could be considered most similar, as the agent does have to avoid certain zones when navigating the area. However, the platforms the agent must stay on periodically switch locations, vary in height, and are in constant motion. These attributes go well beyond the static Safety-Gymnasium environments.\n\n**Armament Burden** introduces another layer of partial observability on top of the conventional limited field of view with egocentric perception. The agent has a carrying load that is not in any way observable from the pixels on the screen. For instance, acquiring a weapon with an empty inventory incurs no immediate cost. However, if the agent has already picked up other items, this could exceed its carrying capacity. 
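This hidden-state cost structure can be illustrated with a toy sketch; the weights and capacity below are made-up values, not the actual task parameters:

```python
# Toy sketch of Armament Burden's hidden carrying load (made-up weights/capacity,
# not the actual task parameters): the same pickup is free or costly depending on
# the load the agent already carries, which is not visible in the pixels.
CAPACITY = 10.0

def pickup(load, item_weight):
    """Return (new_load, cost): cost arises only once total load exceeds capacity."""
    new_load = load + item_weight
    return new_load, max(0.0, new_load - CAPACITY)

load, cost1 = pickup(0.0, 6.0)   # empty inventory: no cost
load, cost2 = pickup(load, 6.0)  # identical item now exceeds capacity
```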
\n\nSimilarly, in **Detonator's Dilemma**, detonating a barrel near a neutral unit with high health points might result in minimal consequences, whereas doing so near units with low HP could incur a high cost. The agent should not only accurately perceive the distances between the neutral units and the barrels, but also between the barrels themselves, to prevent unintended chain explosions. \n\nThe core difficulty in **Collateral Damage** lies in accurately firing the weapon, as the projectile takes time to reach its destination. Both neutral and enemy units are in constant motion, making it difficult to predict their future positions in relation to the projectile. The agent must also balance risks and rewards, considering whether targeting a cluster of enemies near neutral units is worth the cost within the allowable budget.\n\nIn **Precipice Plunge**, the central challenge lies in depth perception, as the agent must find a safe path to carefully descend the cave and avoid fall damage. This type of task is not explored in any of the prior benchmarks. As the difficulty level increases, the randomized terrain becomes steeper and some blocks are in constant vertical movement. Moreover, the cave becomes progressively darker as the agent ventures deeper.\n\nThe answer to Q1 will further outline the challenges of HASARD environments. Moreover, in Appendix A of the paper, we have described the characteristics and complexities of each environment.\n\n[1] Ji, Jiaming, et al. \\"Safety gymnasium: A unified safe reinforcement learning benchmark.\\" Advances in Neural Information Processing Systems 36 (2023).\n\n[2] Gronauer, Sven. \\"Bullet-safety-gym: A framework for constrained reinforcement learning.\\" (2022).\n\n[3] Li, Quanyi, et al. 
\\\"Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning.\\\" IEEE transactions on pattern analysis and machine intelligence 45.3 (2022): 3461-3475.\"}", "{\"comment\": \"Thank you for reviewing our response. We have made further efforts to address the concerns raised by all reviewers:\\n\\n**Inclusion of Modern Baselines**. We integrated the state-of-the-art SafeDreamer baseline to demonstrate the relevance and complexity of HASARD. This inclusion further highlights that the benchmark remains unsolved.\\n\\n**Real-World Applicability**. In our response to Reviewer NtcD, we provided examples of realistic applications where the HASARD environments could be highly relevant.\\n\\n**Physical Realism**. We acknowledge the lack of physical realism in our chosen platform and have thoroughly addressed this point in our responses, as it was raised by multiple reviewers.\\n\\nWe kindly ask the reviewer to consider our recent responses to these issues provided to other reviewers, as they directly address the core concerns outlined in your feedback. We appreciate your time and evaluation of our work.\"}", "{\"comment\": \"**W4. Action Space Limitation & Q3. How to benchmark Continuous RL Methods?**\\n\\nWe thank the reviewer for raising this point, as it is an important aspect of comparing HASARD with prior benchmarks. Indeed, HASARD does not support exclusively continuous action spaces, but discrete and hybrid (discrete + continuous). Real-world applications also often rely on discrete or hybrid action spaces, as seen in domains like healthcare [1], finance [2], robotics [3], agriculture [4], and energy systems [5]. Therefore, we would argue that it is rather a strength than a weakness that HASARD supports discrete and hybrid action spaces since prior popular Safe RL benchmarks (Safety-Gymnasium [6], Safe-Control-Gym [7], CARLA [8]) lack this setting. 
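Supporting such discrete settings in an otherwise continuous-action method mainly means swapping the Gaussian policy head for categorical sampling over logits (as elaborated in point 3 below). A toy sketch, illustrative only and not any particular library's API:

```python
# Toy sketch: discrete action sampling via softmax over logits, the usual
# drop-in replacement for a Gaussian policy head. (Illustrative only.)
import math
import random

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(logits, rng=random):
    """Draw a discrete action from the categorical distribution over logits."""
    r, acc = rng.random(), 0.0
    for action, p in enumerate(softmax(logits)):
        acc += p
        if r < acc:
            return action
    return len(logits) - 1

probs = softmax([2.0, 1.0, 0.1])
action = sample_action([2.0, 1.0, 0.1])
```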
Only AI Safety Gridworlds [9] is uniquely made for discrete actions, but it is 2D and very simplistic.\n\nHowever, if a method is only compatible with continuous action spaces, then it \n1) reflects a limitation of the method's generality.\n2) means that it is tailored for a narrower use case. Similarly, some popular RL methods are specifically designed for discrete action spaces, such as MuZero [10], Agent57 [11], and Rainbow [12].\n3) could be adapted to support a discrete/hybrid action space with ease and no noticeable performance decrease. Adapting a continuous-action method to support discrete action spaces is typically more straightforward than the reverse. This often involves modifying the action sampling process to output probabilities for each discrete action and using techniques like softmax for sampling during training. For instance, one only needs to replace the policy's Gaussian output distribution with a categorical one, such as for SAC-Discrete [13].\n\n\n[1] Raghu, Aniruddh, et al. \\"Deep reinforcement learning for sepsis treatment.\\" arXiv preprint arXiv:1711.09602 (2017).\n\n[2] Hambly, Ben, Renyuan Xu, and Huining Yang. \\"Recent advances in reinforcement learning in finance.\\" Mathematical Finance 33.3 (2023): 437-503.\n\n[3] Zeng, Andy, et al. \\"Learning synergies between pushing and grasping with self-supervised deep reinforcement learning.\\" 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018.\n\n[4] Abioye, Emmanuel Abiodun, et al. \\"Precision irrigation management using machine learning and digital farming solutions.\\" AgriEngineering 4.1 (2022): 70-103.\n\n[5] Glavic, Mevludin, Rapha\u00ebl Fonteneau, and Damien Ernst. \\"Reinforcement learning for electric power system decision and control: Past considerations and perspectives.\\" IFAC-PapersOnLine 50.1 (2017): 6918-6927.\n\n[6] Ji, Jiaming, et al. 
\\\"Safety gymnasium: A unified safe reinforcement learning benchmark.\\\" Advances in Neural Information Processing Systems 36 (2023).\\n\\n[7] Yuan, Zhaocong, et al. \\\"Safe-control-gym: A unified benchmark suite for safe learning-based control and reinforcement learning in robotics.\\\" IEEE Robotics and Automation Letters 7.4 (2022): 11142-11149.\\n\\n[8] Dosovitskiy, Alexey, et al. \\\"CARLA: An open urban driving simulator.\\\" Conference on robot learning. PMLR, 2017.\\n\\n[9] Leike, Jan, et al. \\\"AI safety gridworlds.\\\" arXiv preprint arXiv:1711.09883 (2017).\\n\\n[10] Schrittwieser, Julian, et al. \\\"Mastering atari, go, chess and shogi by planning with a learned model.\\\" Nature 588.7839 (2020): 604-609.\\n\\n[11] Badia, Adri\\u00e0 Puigdom\\u00e8nech, et al. \\\"Agent57: Outperforming the atari human benchmark.\\\" International conference on machine learning. PMLR, 2020.\\n\\n[12] Hessel, Matteo, et al. \\\"Rainbow: Combining improvements in deep reinforcement learning.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 32. No. 1. 2018.\\n\\n[13] Christodoulou, Petros. \\\"Soft actor-critic for discrete action settings.\\\" arXiv preprint arXiv:1910.07207 (2019).\"}", "{\"comment\": \"**W2 & Q2. Incremental to ViZDoom.**\\n\\nWhile HASARD builds upon the ViZDoom platform, it is far from a mere incremental extension of the original ViZDoom environments. Not only do they lack the means of evaluating safe RL, but are static and overly simplistic in terms of variety, visuals, map layout, and objectives, focusing on simple navigation to a target location (*my_way_home*, *deadly_corridor*, *health_gathering*) or turning/sidestepping and shooting enemies (*defend_the_center*, *defend_the_line*, *predict_position*). Additionally, all the original ViZDoom scenarios are restricted to flat, 2D surfaces, limiting their complexity and challenge. 
In contrast, we carefully designed the HASARD environments with more complex terrains and visuals, also introducing a notion of **cost** and exhibiting reward-cost trade-offs. We've incorporated novel **dynamic elements** in each environment: 1) changing and moving platforms in *Volcanic Venture*, 2) various items spawns and random pitfall locations in *Armament Burden*, 3) units with sporadic behaviour and random entity and object spawns in *Collateral Damage* and *Detonator's Dilemma*, 4) random layout and moving platforms in *Precipice Plunge*, and 5) dynamic lighting and item spawns in *Remedy Rush*. Additionally, HASARD introduces difficulty levels, which are absent in the original ViZDoom scenarios. The levels are not merely achieved by tweaking parameters but introduce novel elements.\\n\\nWe agree with the reviewer that advancing towards open-world generalization and realism is crucial for the field. However, to our knowledge, the majority of recent works in the field continue to rely on platforms like Safety-Gymnasium [1, 2, 3, 4, 5], MetaDrive [3], Safe Control Gym [6], MuJoCo [5, 6], or RoboSuite [6], which are far removed from realistic applications. We would appreciate it if the reviewer could provide references to recent works that utilize real-world settings or highly realistic environments for Safe RL or benchmark their systems on real-world systems.\\n\\nUsing ultra-realistic simulation environments for research poses challenges of computational overhead. Incorporating highly detailed physics and advanced visual rendering substantially increases the cost and time required for simulations. HASARD is intentionally designed to strike a balance between realism and computational efficiency, making it accessible to low-budget research settings. This approach broadens access to vision-based Safe RL research, enabling not only more researchers to engage with the field but also to expand the scope of evaluation from robotic motor control. 
Note that ViZDoom remains widely adopted in recent research [7, 8, 9, 10].\\n\\n[1] Zhao, Weiye, et al. \\\"Guard: A safe reinforcement learning benchmark.\\\" arXiv preprint arXiv:2305.13681 (2023).\\n\\n[2] Hoang, Huy, Tien Mai, and Pradeep Varakantham. \\\"Imitate the good and avoid the bad: An incremental approach to safe reinforcement learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 11. 2024.\\n\\n[3] Huang, Weidong, et al. \\\"Safe dreamerv3: Safe reinforcement learning with world models.\\\" arXiv preprint arXiv:2307.07176 (2023).\\n\\n[4] Zhao, Weiye, et al. \\\"Implicit Safe Set Algorithm for Provably Safe Reinforcement Learning.\\\" arXiv preprint arXiv:2405.02754 (2024).\\n\\n[5] Li, Zeyang, et al. \\\"Safe reinforcement learning with dual robustness.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).\\n\\n[6] Gu, Shangding, et al. \\\"A Review of Safe Reinforcement Learning: Methods, Theories and Applications.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).\\n\\n[7] Park, Junseok, et al. \\\"Unveiling the Significance of Toddler-Inspired Reward Transition in Goal-Oriented Reinforcement Learning.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 1. 2024.\\n\\n[8] Kim, Seung Wook, et al. \\\"Neuralfield-ldm: Scene generation with hierarchical latent diffusion models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.\\n\\n[9] Zhai, Yunpeng, et al. \\\"Stabilizing Visual Reinforcement Learning via Asymmetric Interactive Cooperation.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[10] Valevski, Dani, et al. \\\"Diffusion models are real-time game engines.\\\" arXiv preprint arXiv:2408.14837 (2024).\"}", "{\"comment\": \"We greatly appreciate your thoughtful engagement with our work.
As we have not detected any further concerns from the reviewer, we kindly ask that you consider increasing the score if you believe we have successfully addressed all the points raised. We also hope the additional experiments and analyses conducted during the rebuttal period, in response to other reviewers' feedback, further strengthen the contributions of our work.\"}", "{\"comment\": \"I am pleased to see that, compared to my previous review of this paper, the authors have provided more insightful motivations and perspectives in their rebuttal during the discussion with other reviewers regarding the task\\u2019s motivation and details. If these insights can be well-integrated into the paper along with more comprehensive experiments and analyses, it will significantly strengthen the contributions of this work.\\nAdditionally, I appreciate the authors' strong defense of their starting point and acknowledge the validity of most of their arguments.\\nBesides, I noticed that the authors discussed with Reviewer FzXY the use of OmniSafe. Upon reviewing the authors\\u2019 publicly available code, I found a folder named \\\"OmniSafe,\\\" and it seems that the P3O algorithm only appears within it. However, the authors stated that they used Sample-Factory, which leaves me confused.\"}", "{\"comment\": \"We apologize for the delayed response. Adapting model-based methods like SafeDreamer to a new setting is a complex and time-intensive process due to their high sensitivity. We kindly request that the reviewer nevertheless take our responses into consideration.\\n\\n**W1 & Q1. ViZDoom is pixelated and not 3D**\\n\\nIndeed, ViZDoom uses rendering with sprites and precomputed lighting, whereas modern games use elements of true 3D graphics, such as dynamic lighting, shadows, and voxel-based rendering. However, this graphical complexity comes with a computational overhead. 
\\n\\nIt is important to note that when benchmarking RL in simulation environments, the critical aspect is not necessarily how the environment is rendered under the hood, but how the environment appears to the agent. Accurate 3D dynamics are essential for contexts like robotics simulations and modern video games, where precise collision calculations and complex rigid-body interactions significantly impact the outcome or user experience. However, HASARD focuses on a different dimension of complexity.\\n\\nOur aim with HASARD is to expand the scope beyond basic navigation and continuous robotic motor control tasks. While environment suites like CARLA provide high-fidelity graphics and Safety-Gymnasium offers sophisticated physics simulations, HASARD serves as a complementary benchmark rather than a direct competitor. While realistic physics and lifelike environments are undoubtedly important, HASARD emphasizes fostering higher-order comprehension and decision-making.\\n\\nHASARD's scenarios require agents to perceive spatial relationships, accurately gauge depth, and predict the movement and future positions of entities. This focus on reasoning, planning, and decision-making allows HASARD to tackle a different dimension of safe RL. Additionally, HASARD achieves this at a fraction of the computational cost of physics-based simulation environments, making it particularly well-suited for sample-hungry, model-free approaches. For these purposes, HASARD effectively serves as a 3D benchmark. Although visually unrealistic, it closely replicates the intricacies of navigating the real world.\\n\\n**W2 & Q2. 
Narrow Range of Baselines**\\n\\n|Method|AB R\\u2191|AB C\\u2193|VV R\\u2191|VV C\\u2193|RR R\\u2191|RR C\\u2193|CD R\\u2191|CD C\\u2193|PP R\\u2191|PP C\\u2193|DD R\\u2191|DD C\\u2193|\\n|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n|PPO|1.99|118.22|42.20|347.77|53.16|68.27|34.86|84.44|487.17|894.22|49.36|23.72|\\n|PPOCost|*0.04*|*0.05*|*10.61*|*14.76*|*0.01*|*0.02*|*3.93*|*1.35*|*15.39*|*6.64*|49.95|19.79|\\n|PPOLag|2.03|*31.89*|22.77|52.54|8.02|9.74|12.22|5.29|23.87|*34.98*|*19.27*|5.26|\\n|PPOSaut\\u00e9|2.32|*37.68*|18.89|246.80|*1.17*|*3.50*|*2.74*|*2.76*|101.64|176.14|22.20|10.36|\\n|PPOPID|*2.78*|*33.89*|*25.80*|*49.02*|13.51|5.14|*14.51*|*4.97*|*54.02*|*49.57*|*20.49*|*4.87*|\\n|P3O|*2.61*|*29.94*|*24.93*|*49.84*|*14.29*|*4.19*|13.57|*4.12*|269.14|428.72|26.73|7.49|\\n|SafeDreamer|*3.11*|*48.48*|*33.07*|*49.60*|*19.81*|*4.79*|***21.57***|*4.77*|71.22|44.03|36.33|*4.60*|\\n|Human|***8.03***|*46.43*|***38.06***|*41.12*|***34.21***|*3.17*|19.43|*4.77*|***147.32***|*47.45*|***43.90***|*4.02*|\\n\\nWe have included **SafeDreamer** [1], the most recent state-of-the-art Safe RL method, in our evaluation. Similarly to other benchmarks, SafeDreamer surpasses other baselines, achieving the highest results across all environments. However, it only surpasses the human baseline on *Collateral Damage*, a task requiring high precision and accurate timing. It does not yet manage to develop clever strategies in *Armament Burden* (discarding heavy weapons and avoiding decoy items), *Precipice Plunge* (restarting the episode when a safe action can no longer be performed), and *Remedy Rush* (immediately seeking out the night vision goggles to gain permanent vision). Therefore, in these environments, the human baseline, although weaker in control and accuracy, still outperforms SafeDreamer thanks to a better strategy and knowledge of the task.
The results of evaluating SafeDreamer show that HASARD environments provide ample room for algorithmic improvement, not to mention when also utilizing the full action space with 864 discrete and 2 continuous actions, which could make control far more efficient. \\n\\n[1] Huang, Weidong, et al. \\\"Safe dreamerv3: Safe reinforcement learning with world models.\\\" arXiv preprint arXiv:2307.07176 (2023).\"}", "{\"comment\": \"The reviewer thanks the authors for their response. I now understand your point.\"}", "{\"summary\": \"This paper introduces the HASARD, a benchmark designed for egocentric pixel-based safe RL in diverse and stochastic 3D environments. Unlike existing benchmarks, HASARD emphasizes spatial comprehension, short-term planning, and active prediction for high rewards while ensuring safety. It offers three difficulty levels, supporting both soft and hard safety constraints. The benchmark includes heatmaps for visual analysis, aiding in strategy development. By targeting vision-based embodied safe RL, HASARD addresses the need for benchmarks mirroring real-world complexities. The paper's contributions include the design of six novel ViZDoom environments with safety constraints, integration with Sample-Factory for rapid simulation and training. Evaluation of baseline methods within HASARD highlights challenges in balancing performance and safety under constraints.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper demonstrates several notable strengths across the dimensions of originality, quality, clarity, and significance:\\n\\n \\n\\n1. **Originality**: It introduces HASARD, a benchmark specifically designed for vision-based embodied safe reinforcement learning (RL) in complex 3D environments. \\n\\n \\n\\n2. **Quality**: Comprehensive design of 6 diverse environments with 3 difficulty levels each, offering a range of challenges. \\n\\n \\n\\n3. 
**Clarity**: The paper is structured in a logical and coherent manner, facilitating the understanding of complex concepts. \\n\\n \\n\\n4. **Significance**: The paper Addresses an important need in safe RL research for more realistic and challenging benchmarks. It enables systematic evaluation and comparison of safe RL algorithms in vision-based 3D settings.\", \"weaknesses\": \"While the paper makes valuable contributions, several areas could be improved:\\n\\n \\n\\n1. The paper refers to ViZDoom as a 3D environment, but its pixelated, less detailed graphics compared to modern 3D games challenge this characterization. \\n\\n \\n\\n2. **Narrow Range of Baselines**: Evaluations focus primarily on PPO-based algorithms. Incorporating approaches like model-based safe RL or constrained policy optimization (e.g., https://arxiv.org/abs/2210.07573) would enhance the assessment. \\n\\n \\n\\n3. **Limited Visual Input Analysis**: Though vision-based learning is emphasized, the paper lacks analysis of how visual complexity influences performance. Exploring different visual conditions (lighting, distractors) and comparing raw pixels with simplified representations would highlight the unique challenges of vision-based safe RL, especially since the visual inputs in the environment appear less realistic. \\n\\n \\n\\n4. **Action Space Limitation**: Only discrete action spaces are supported. It is unclear how continuous safe RL algorithms would be benchmarked. \\n\\n \\n\\n5. **Real-World Relevance**: The connection between the benchmark tasks and real-world safe RL challenges needs clearer articulation. Providing examples of practical applications would strengthen motivation. \\n\\n \\n\\nAddressing these points could strengthen the paper and increase the impact and utility of the HASARD benchmark for the safe RL research community.\", \"questions\": \"1. 
Is ViZDoom truly a 3D environment, considering its graphics appear pixelated and less detailed compared to modern 3D games?\\n\\n \\n\\n2. Why are the baseline algorithms limited to PPO-based approaches? Could the paper include more diverse methods, such as model-based safe RL or constrained policy optimization (e.g., https://arxiv.org/abs/2210.07573)? \\n\\n \\n\\n3. How can continuous safe RL algorithms be benchmarked when the paper only supports discrete action spaces?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for reviewing our response. To address your biggest concern, we have included **SafeDreamer** [1], the most recent state-of-the-art Safe RL method, in our evaluation. Similarly to other benchmarks, SafeDreamer surpasses prior baselines, achieving the highest results across all environments. However, it only surpasses the human baseline on *Collateral Damage*, a task requiring high precision and timing. It does not yet manage to develop clever strategies in *Armament Burden* (discarding heavy weapons and avoiding decoy items), *Precipice Plunge* (restarting the episode when a safe action can no longer be performed), and *Remedy Rush* (immediately seeking out the night vision goggles to gain permanent vision). Therefore, in these environments, the human baseline, although weaker in control and accuracy, still outperforms SafeDreamer thanks to a better strategy and knowledge of the task. The results of evaluating SafeDreamer show that HASARD environments provide ample room for algorithmic improvement, not to mention when also utilizing the full action space with 864 discrete and 2 continuous actions, which could make control far more efficient.
We hope that the inclusion of SafeDreamer is sufficient to satisfy the reviewer's final concern.\\n\\n|Method|AB R\\u2191|AB C\\u2193|VV R\\u2191|VV C\\u2193|RR R\\u2191|RR C\\u2193|CD R\\u2191|CD C\\u2193|PP R\\u2191|PP C\\u2193|DD R\\u2191|DD C\\u2193|\\n|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n|PPO|1.99|118.22|42.20|347.77|53.16|68.27|34.86|84.44|487.17|894.22|49.36|23.72|\\n|PPOCost|*0.04*|*0.05*|*10.61*|*14.76*|*0.01*|*0.02*|*3.93*|*1.35*|*15.39*|*6.64*|49.95|19.79|\\n|PPOLag|2.03|*31.89*|22.77|52.54|8.02|9.74|12.22|5.29|23.87|*34.98*|*19.27*|5.26|\\n|PPOSaut\\u00e9|2.32|*37.68*|18.89|246.80|*1.17*|*3.50*|*2.74*|*2.76*|101.64|176.14|22.20|10.36|\\n|PPOPID|*2.78*|*33.89*|*25.80*|*49.02*|13.51|5.14|*14.51*|*4.97*|*54.02*|*49.57*|*20.49*|*4.87*|\\n|P3O|*2.61*|*29.94*|*24.93*|*49.84*|*14.29*|*4.19*|13.57|*4.12*|269.14|428.72|26.73|7.49|\\n|**SafeDreamer**|*3.11*|*48.48*|*33.07*|*49.60*|*19.81*|*4.79*|***21.57***|*4.77*|71.22|44.03|36.33|*4.60*|\\n|Human|***8.03***|*46.43*|***38.06***|*41.12*|***34.21***|*3.17*|19.43|*4.77*|***147.32***|*47.45*|***43.90***|*4.02*|\\n\\n[1] Huang, Weidong, et al. \\\"Safe dreamerv3: Safe reinforcement learning with world models.\\\" arXiv preprint arXiv:2307.07176 (2023).\"}", "{\"summary\": \"HASARD is a benchmark testing platform specifically designed for safe reinforcement learning, based on ViZDoom, providing a diverse range of 3D environments.\\n\\n1. The tasks on this platform require agents to pursue high rewards while considering safety strategies, moving beyond simple 2D navigation to incorporate complex elements such as spatial understanding.\\n2. HASARD offers three difficulty levels and supports both soft and hard safety constraints, flexibly adapting to varying safety requirements.\\n3. 
The platform integrates Sample-Factory, enabling high-speed simulation that allows agents to address real-world safety challenges while reducing computational costs.\\n4. HASARD includes six environments based on ViZDoom and benchmarks various methods to demonstrate the limitations of existing technologies.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors tested six baseline algorithms on HASARD and provided an analysis of the results.\\n2. The tasks move beyond simple 2D navigation to incorporate complex elements such as spatial understanding\", \"weaknesses\": \"1. The reviewer believes that if the distinction between soft and hard constraints is merely based on whether the threshold is $0$, then other benchmarks share this characteristic, making this claim somewhat unsubstantiated.\\n2. Although multiple methods were tested in the current experiments, there is a lack of analysis on performance under different safety budgets. It is recommended to include experiments with varying safety thresholds to better understand the trade-off between safety and reward for each algorithm.\\n3. HASARD is based on the ViZDoom game engine, which, while computationally inexpensive, lacks detailed simulation of real-world physics.\\n4. The anonymous video link provided by the authors is inaccessible.\", \"questions\": \"1. The article does not provide an in-depth analysis of performance under different safety budgets. Is there a plan to supplement the experiments with varying safety thresholds to comprehensively demonstrate the trade-offs between reward and safety for each algorithm? This would be very helpful in understanding the adaptability of different methods under various safety requirements.\\n2. 
Considering the limitations of ViZDoom in simulating real-world physics, have the authors explored other engines with superior physical simulation capabilities (e.g., Isaac Gym)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"This paper presents a new benchmark intended to be used for evaluating safe reinforcement learning algorithms, built on the ViZDoom platform. The benchmark introduces a variety of new environments and tasks. Experiments with a few baselines illustrate the headroom on the benchmark (current methods do not saturate it, but also achieve non-trivial performance). The claimed contribution is that it is the first to exclusively target vision-based safe RL.\\n\\n### Strengths\\nReviewers commented on the variety of environments and challenges introduced and the utility of the structured curriculum (three difficulty levels), analysis of a variety of baselines and that the benchmark requires solving tasks beyond simple 2D navigation, that the paper targets the important / significant problem of building realistic and reliable RL and safe RL benchmarks, detailed evaluations of a variety of baselines, the originality of the contribution, and the comprehensive design\\n\\n### Weaknesses\\nReviewers commented that some of the baselines were outdated, potential solvability by existing algorithms, unclear contribution relative to the Safety Gymnasium benchmark (including that most of the tasks revolve around avoiding hazardous obstacles), that the baselines were narrow, there was limited visual input analysis required, the action space was limited, and that the connection between the benchmark and real-world tasks was unclear.\\n\\nThe authors responded to these weaknesses (see my summary of the details below), and in 
balance, sufficiently addressed and dispelled arguments for rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer FzXY (rating 8, confidence 4) found their concerns addressed by the authors\\n\\nReviewer NMGZ (rating 5, confidence 4) increased their score from 3 to 5 after the rebuttal. The reviewer did not clarify what concerns remained when prompted by the authors.\\n\\nReviewer Rd3K (rating 5, confidence 4) remained unconvinced by the significance of the benchmark, stating \\n> I am not convinced the safe RL community will be incentivized to use it. The main reason is that the notions of constraints in this benchmark are not directly tied to the very pragmatic safety considerations that need to be tackled in the real world - ranging from control systems to robotic deployments.\\n\\nand \\n> The benchmark feels a bit incremental compared to the already existing VizDoom framework that has been around for years.\\n\\nand \\n> I am still not convinced about the significance of the proposed benchmark, the applicability of algorithms tested on it to other more realistic scenarios where safety would be actually meaningful\\n\\nand \\n\\n> I do not see how any form of safety guarantees from this environment can realistically translate to these applications that the authors have cited\\n\\nThe crux of these concerns seems to stem from the lack of physical realism (no simulated physics in the benchmark). However, while the reviewer may turn out to be correct that the safe RL community may not use the benchmark, I do not think this speculation is sufficient to warrant rejection. Because the paper and benchmark are otherwise solid and have clearly new contributions, I would err on the side of letting the safe RL community decide whether it is a meaningful benchmark. 
Furthermore, regarding the \\\"form of safety guarantees\\\" comment from the reviewer, the authors did not make the claim that the benchmark's value would be to develop methods that provided safety guarantees (I do not think that that is an agreed-upon definition of the purpose of safe RL research / methods). It is plausible that this benchmark could serve as an important and much-needed vehicle for future safe RL research that produces safe RL methods that contain no safety guarantees (e.g., those that only deliver empirically high performance).\\n\\nReviewer NtcD (rating 5, confidence 3) did not respond to the author's response. The authors, in my judgement, convincingly addressed their concerns.\"}" ] }
5B6eSE6l4M
Performance Heterogeneity in Message-Passing and Transformer-based Graph Neural Networks
[ "Lukas Fesser", "Melanie Weber" ]
Graph Neural Networks have emerged as the most popular architecture for graph-level learning, including graph classification and regression tasks, which frequently arise in areas such as biochemistry and drug discovery. Achieving good performance in practice requires careful model design. Due to gaps in our understanding of the relationship between model and data characteristics, this often requires manual architecture and hyperparameter tuning. This is particularly pronounced in graph-level tasks, due to much higher variation in the input data than in node-level tasks. To work towards closing these gaps, we begin with a systematic analysis of individual performance in graph-level tasks. Our results establish significant performance heterogeneity in both message-passing and transformer-based architectures. We then investigate the interplay of model and data characteristics as drivers of the observed heterogeneity. Our results suggest that graph topology alone cannot explain heterogeneity. Using the Tree Mover’s Distance, which jointly evaluates topological and feature information, we establish a link between class-distance ratios and performance heterogeneity in graph classification. These insights motivate model and data preprocessing choices that account for heterogeneity between graphs. We propose a selective rewiring approach, which only targets graphs whose individual performance benefits from rewiring. We further show that the optimal network depth depends on the graph’s spectrum, which motivates a heuristic for choosing the number of GNN layers. Our experiments demonstrate the utility of both design choices in practice.
[ "Graph Neural Networks", "Transformers", "Rewiring", "Example Hardness", "Generalization" ]
Reject
https://openreview.net/pdf?id=5B6eSE6l4M
https://openreview.net/forum?id=5B6eSE6l4M
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xohb5xGsO6", "rixPwBAFIL", "o51z9tCQWs", "o3qKOtDk7y", "mMWSOyVSUX", "iC8War0QR9", "aDbK3PD1ld", "SxBBT79S5r", "OvclklsgLv", "A04R5s3Bwv", "8LAg8UsOBI", "5UKBhoT4QI", "2yu6tcbG8E" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "decision", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732031604058, 1732031762305, 1732786967083, 1732042550721, 1732495054493, 1730710687205, 1730566337553, 1733222515817, 1730075327213, 1737524178328, 1730295408782, 1732044087321, 1734608280337 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12298/Authors" ], [ "ICLR.cc/2025/Conference/Submission12298/Authors" ], [ "ICLR.cc/2025/Conference/Submission12298/Area_Chair_bS8C" ], [ "ICLR.cc/2025/Conference/Submission12298/Authors" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_wPWz" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_Z81b" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_pTz2" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_pTz2" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_2jpS" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12298/Reviewer_wPWz" ], [ "ICLR.cc/2025/Conference/Submission12298/Authors" ], [ "ICLR.cc/2025/Conference/Submission12298/Area_Chair_bS8C" ] ], "structured_content_str": [ "{\"comment\": [\"We thank the reviewer for the feedback and suggestions on how to improve the paper. Regarding the weakness that the reviewer mentions:\", \"LRGB is the most common benchmark for testing how well models encode long-range interactions between nodes. It is generally considered a \\u201cchallenging\\u201d benchmark. 
However, we will add additional results on the suggested OGB tasks in the next version of the manuscript.\", \"The selective rewiring approach and depth heuristic are informed by the graph\\u2019s topology (characterized by the spectrum of the Graph Laplacian) and should be computed for the data set at hand. Our experimental results corroborate the utility of those heuristics.\"]}", "{\"comment\": [\"We thank the reviewer for the encouraging feedback.\", \"We provide details on hyperparameter choices in the appendix. We will expand this description in the next version.\", \"To the best of our knowledge, this work is the first to systematically study performance heterogeneity in graph-level learning. We would be grateful for any pointers to \\\"other state-of-the-art methods\\\" that the reviewer can provide.\", \"LRGB is a commonly used, large-scale benchmark. We will add additional results on the suggested OGB tasks in the next version of the manuscript.\"], \"response_to_questions\": [\"We find heterogeneity to appear with various widths, depths, learning rates, activation functions, and dropout rates.\", \"We order the graphs by their indices in the datasets, allowing us to compare models.\"]}", "{\"comment\": \"I would like to encourage the reviewers to engage with the author's replies if they have not already done so. At the very least, please\\nacknowledge that you have read the rebuttal.\"}", "{\"comment\": \"We thank the reviewer for the feedback on our submission.\\n\\n> There is certainly merit in investigating the impact factors for the performance of GNN and I do like a more systematic approach. However, given the fact that GNN can be viewed as a function with the input of both structure and feature, it seems obvious that both feature and structure would affect the output/performance of GNNs. 
With this said, I am not convinced and not comfortable with the claim that topology is enough to explain node-level tasks (see [1] for an example).\\n- We would like to point the reviewer to [1] and [2], which show experimentally that node degree is highly indicative of GNN performance in many real-world datasets. [2] can even rigorously connect node separability (as a proxy for GNN performance) and node degree in certain stylized settings.\\n\\n> One of the claimed contributions is the so-called \\\"heterogeneity profiles\\\". I do not see a detailed introduction or discussion on this technique and why it is novel. Based on the description in Section 3, it seems like standard random experiments/k-fold validation. Please correct me if I am wrong\\n- We will provide a more explicit definition of heterogeneity profiles in the next version.\\n- We believe that there is a misunderstanding with our methodology. This is not k-fold validation: k-fold validation would randomly split the data and then report average accuracy on the test set. Instead, we look at individual graph-level accuracy.\\n\\n> The claimed research question investigates the factors that explain performance heterogeneity. However, the explained framework (tree mover distance) used in the paper is directly adopted from another paper. What is the new insight provided in this paper? In addition, there are many other distance metrics, such as FID, that can combine structure and feature. Why not consider those?\\n- Please note that while the TMD is indeed not novel (which is clearly stated in the paper), the notion of the class-distance ratio (CDR) is novel, as is the insight that CDR largely explains graph-level performance. We can think of the CDR as a notion of dataset separability derived from the TMD. Other graph distance measures could indeed also be used, but those generally scale even worse than TMD [3].\\n\\n> The paper tries to connect the experimental insight with graph rewiring and over-smoothing.
The papers try to connect over-smoothing with the diffusion property of graph structure (Fiedler eigenvalue of graph matrix). Despite being somewhat intuitive, it is not a strong explanation for over-smoothing as it is not clear how the diffusive property of graph structure would affect training or generalization. In addition, I think this diffusive property would largely affect node-level tasks. It is not entirely clear to me why this concept is applicable for graph-level tasks. Please explain. While I do think that selective graph rewiring could be a highlight of the paper, the paper does not go into detail in this regard. For example, how do you use the empirical result/theoretical result to obtain the proposed criteria for the selection?\\n- Over-smoothing is not a focus of this paper, but rather model and hyperparameter choices that account for performance heterogeneity in graph-level learning. While rewiring is used to mitigate over-smoothing, the spectral rewiring method that we consider (FoSR) only addresses over-squashing, not over-smoothing. \\n \\n> the presentation and organization of the paper need to be improved. I think the paper right now attempts to connect too many concepts and methods (performance heterogeneity, over-smoothing, graph rewiring, etc.). I do admire the ambitious goal. However, in the current version of the paper, the connections among these concepts and methods are presented in a rather superficial way (this might be because of the page limit). I do encourage the authors to dive deeper into these connections as they are important for advancing GNN.\\n- We appreciate any feedback on improving the structure of the paper.\\n\\n[1] Liang, Langzhang, et al. \\\"Resnorm: Tackling long-tailed degree distribution issue in graph neural networks via normalization.\\\" arXiv preprint arXiv:2206.08181 (2022).\\n\\n[2] Li, Ting Wei, Qiaozhu Mei, and Jiaqi Ma. 
\\\"A metadata-driven approach to understand graph neural networks.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Chuang, Ching-Yao, and Stefanie Jegelka. \\\"Tree mover's distance: Bridging graph metrics and stability of graph neural networks.\\\" Advances in Neural Information Processing Systems 35 (2022): 2944-2957.\"}", "{\"comment\": \"I thank the author for the diligent response. However, many of my concerns remain. As such, I will keep the original score.\", \"additional_comment\": \"There is a significant gap between \\\"graph structure can be a good indicator or sufficient in style setting\\\" and \\\"graph structure is enough to explain node classification task\\\". One straightforward example would be to use the contextual stochastic block model where the node identified from this model would depend on both structure and node feature.\"}", "{\"summary\": \"This paper investigates the performance heterogeneity of message-passing and transformer-based architectures in graph-level tasks. Unlike previous studies that focused on node-level tasks, the authors find that graph topology alone does not fully explain heterogeneity. Instead, they establish a connection between class-distance ratios and performance heterogeneity using the Tree Mover's Distance. 
Building on these observations, the authors propose a selective rewiring approach and a heuristic for determining optimal GNN depths.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Performance heterogeneity is a well-recognized and valuable research problem in graph learning.\", \"The proposed selective rewiring approach is promising for addressing performance heterogeneity in graph-level tasks.\", \"The observation that optimal network depth depends on the graph\\u2019s spectrum is intriguing, and the subsequent heuristic method for selecting the number of GNN layers is validated by the experimental results.\"], \"weaknesses\": [\"The generalization capability of the proposed selective rewiring approach remains uncertain. While the motivation for this approach is empirically driven, the observations may be biased by the specific graph datasets tested. Would the conclusions hold on challenging open-source benchmarks, such as large-scale datasets in OGB (e.g., ogbg-ppa and ogbg-code2)?\", \"Given the proposed solutions, how can they be applied to new graph scenarios? For new graph datasets, is there a confidence measure for selective rewiring or heuristic GNN depth prediction?\"], \"questions\": \"This work addresses an important yet challenging research topic in the graph field. I appreciate the focus on identifying performance heterogeneity in graph-level tasks. However, my primary concern is ensuring the proposed solution\\u2019s generalizability. For further details, please refer to the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper delves into the phenomenon of performance heterogeneity in graph-level learning, emphasizing how variations in graph topology influence model effectiveness across various architectures. 
Central to the study is the introduction of heterogeneity profiles, a novel analytical tool designed to systematically evaluate performance disparities across graph-level tasks. These profiles reveal that performance heterogeneity is shaped by factors beyond mere topological characteristics. Building on the analysis of heterogeneity profiles, the research progresses by proposing a selective rewiring strategy. This strategy aims to optimize network architecture based on the spectral properties of graphs, positing that aligning these properties can mitigate the need for extensive hyperparameter tuning by standardizing the optimal depth of GNN layers across different datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of heterogeneity profiles as a tool in graph-level learning to analyze performance across graphs provides a new methodological avenue for studying GNNs.\\n\\n2. The selective rewiring approach offers a pragmatic solution to a common problem in GNN deployment, potentially simplifying the model training process.\\n\\n3. The experiments are well-designed, covering multiple datasets and configurations.\", \"weaknesses\": \"1. The paper does not provide a detailed description of how hyperparameters were tuned, only presenting the final hyperparameter results. Different hyperparameter settings, including adjustments to hidden dimensions, dropout rates, activation functions, and normalization techniques, could provide a stronger, more robust set of results.\\n\\n2. The study lacks a detailed comparison with other state-of-the-art methods that aim to address similar challenges in GNNs. While the paper proposes innovative strategies for improving graph-level task performance, such as selective rewiring and optimized network depth based on heterogeneity profiles, it lacks empirical evidence showing that these approaches achieve state-of-the-art results on 1 or 2 benchmark datasets. 
This omission could undermine the perceived effectiveness and practical relevance of the proposed methods.\\n\\n3. The datasets used in the study are relatively small in scale. Incorporating results from more extensive and challenging datasets, such as those from the OGB, would strengthen the validation of the techniques and enhance the paper\\u2019s impact.\", \"questions\": \"1. How might different settings for parameters affect the conclusions drawn from the study?\\n\\n2. In line 298, these 'difficult' graphs are nearly identical for GIN and GraphGPS. Given that Figure 3 provides quantitative metrics but does not specify which exact graphs are considered \\\"difficult\\\" across both architectures, could the authors clarify how they determined that the same graphs pose difficulties for both GIN and GraphGPS?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for providing a response to my questions. However, I did not see the new experimental results, nor did I see how different parameter settings might affect the conclusions of the study.\"}", "{\"summary\": \"The paper investigates the performance variability of GNNs in graph-level tasks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Identifying the reasons behind the high variance in performance across different models is crucial; however, this work fails to offer new insights into this issue.\", \"weaknesses\": \"1. The paper highlights the variance in GNN performance on graph-level tasks; however, I believe this contribution does not offer new insights to the community. It is already well-established that Graph Neural Networks (GNNs) exhibit performance inconsistencies and that the effectiveness of different models often varies across datasets [1, 2].\\nFurthermore, the paper fails to propose a concrete solution to address this phenomenon. 
Inconsistencies in evaluation across sections further undermine its findings, as I will elaborate in my next comment.\\n\\n2.\\nThe authors investigate the inconsistency of performance on graph-level tasks by examining a range of models and datasets. However, the presentation lacks clarity regarding the consistency of their selections. For instance, in Section 3.2, three datasets are analyzed using two GNN architectures\\u2014specifically, GCN and GraphGPS. In contrast, Section 3.3 shifts to the Mutag dataset and the GIN model, raising questions about why different datasets and models were chosen for these sections.\\nGiven that the paper's primary contribution aims to highlight an empirical phenomenon, I believe a more comprehensive evaluation is warranted, one that encompasses a broader array of benchmarks and various GNN architectures. Notably, the Mutag and Proteins datasets are recognized for their instability, as documented in previous studies [1]. The MUTAG dataset, in particular, is small and characterized by high variance, which has led to its declining usage in recent research.\\nAdditionally, in Section 4, GPS is not tested at all; instead, GAT is utilized, despite all sections focusing on the same claim of heterogeneity in results. This inconsistency in model evaluation detracts from the paper\\u2019s coherence and impact.\\nThe evaluation graph transformers focus solely on one type of graph transformer, GraphGPS, which is inadequate to substantiate the claim that \\u201cOur analysis suggests that both message-passing and transformer-based GNNs display performance heterogeneity in classification and regression tasks\\u201d (line 102). There exists a diverse range of graph transformers, as highlighted in studies such as [3, 4], which should be considered to strengthen the analysis.\\n\\n3. The paper suffers from poor writing, featuring numerous grammatical errors and incomplete sentences. For instance, refer to lines 250, 352, 168, 576, and 187. 
Additionally, some sentences lack clear connections to the surrounding text, such as the statement: \\\"Size generalization in GNNs has been studied in (Yehudai et al., 2021; Maskey et al., 2022; Le & Jegelka, 2024).\\\"\\nThe overall quality of English in the paper is inadequate and unprofessional, significantly detracting from the clarity and credibility of the research.\\n\\n8. Overall, I find it difficult to see how the content of the paper supports the claims made in the abstract.\\n\\n[1] A Fair Comparison of Graph Neural Networks for Graph Classification, Errica et al.\\n[2] Design Space for Graph Neural Networks, You et al., NeurIPS20.\\n[3] Do Transformers Really Perform Bad for Graph Representation?, Ying et al, 2021.\\n[4] Heterogeneous Graph Transformer, Hu et al., 2020.\", \"questions\": \"1. Can you explain the rationale behind the differing datasets and models used in Sections 3.2, 3.3. and 4?\\n2. While the paper highlights performance inconsistencies in GNNs, it does not present any concrete solutions to address this issue. What are your thoughts on proposing methodologies or frameworks to mitigate these inconsistencies in future work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper investigates performance heterogeneity in graph neural networks (GNNs), specifically focusing on message-passing (MPGNNs) and transformer-based architectures. It addresses the challenges in understanding model performance variation on individual graphs within datasets used for graph-level learning. To capture performance variations, the authors introduce heterogeneity profiles and leverage the Tree Mover\\u2019s Distance (TMD) to demonstrate that both topological and feature information influence performance heterogeneity. 
The study explores how class-distance ratios, graph rewiring, and network depth impact heterogeneity, proposing a selective rewiring method and a depth-selection heuristic based on spectral alignment. The experiments validate these techniques, showing improved performance on multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"I think the attempted research question is fundamental and important for advancing GNN\", \"I do like the overall approach which is systematic\"], \"weaknesses\": \"- My main concerns with respect to this paper are the contribution and novelty of the methodology and results. More specifically,\\n\\na. There is certainly merit in investigating the impact factors for the performance of GNN and I do like a more systematic approach. However, given the fact that GNN can be viewed as a function with the input of both structure and feature, it seems obvious that both feature and structure would affect the output/performance of GNNs. With this said, I am not convinced and not comfortable with the claim that topology is enough to explain node-level tasks (see [1] for an example).\\n\\nb. One of the claimed contribution is the so-called \\\"heterogeneity profiles\\\". I do not see a detailed introduction or discussion on this technique and why it is novel. Based on the description on Section 3, it seems a standard random experiments/k-fold validation. Please correct me if I am wrong\\n\\nc. The claimed research question investigates the factors that explain performance heterogeneity. However, the explained framework (tree mover distance) used in the paper is directly adopted from another paper. What is the new insight provided in this paper? In addition, there are many other distance metric such as FID can combine structure and feature. Why not consider those?\\n\\nd. The paper tries to connect the experimental insight with graph rewiring and over-smoothing. 
The papers try to connect over-smoothing with the diffusion property of graph structure (Fiedler eigenvalue of graph matrix). Despite being somewhat intuitive, it is not a strong explanation for over-smoothing as it is not clear how the diffusive property of graph structure would affect training or generalization. In addition, I think this diffusive property would largely affect node-level tasks. It is not entirely clear to me why this concept is applicable for graph-level tasks. Please explain. While I do think that selective graph rewiring could be a highlight of the paper, the paper does not go into detail in this regard. For example, how do you use the empirical result/theoretical result to obtain the proposed criteria for the selection?\\n\\n- the presentation and organization of the paper need to be improved. I think the paper right now attempts to connect too many concepts and methods (performance heterogeneity, over-smoothing, graph rewiring, etc.). I do admire the ambitious goal. However, in the current version of the paper, the connections among these concepts and methods are presented in a rather superficial way (this might be because of the page limit). I do encourage the authors to dive deeper into these connections as they are important for advancing GNN.\\n\\n\\n[1] \\\"Subgroup generalization and fairness of graph neural networks.\\\" Advances in Neural Information Processing Systems 34 (2021): 1048-1061.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for reading our submission. We politely disagree with the reviewer\\u2019s assessment of the paper. 
We believe that there are several misunderstandings regarding the motivation, methodology and contributions of our paper.\\n\\nWe maintain that the paper\\u2019s focus on individual performance in graph-level learning offers a new perspective on performance heterogeneity. To the best of our knowledge, the selective rewiring approach and heuristic for the GNN depth are novel. This was also acknowledged by the other reviewers.\\n\\nWe respond to a few of the points that the reviewer raised.\\n\\n> The paper highlights the variance in GNN performance on graph-level tasks; however, I believe this contribution does not offer new insights to the community. It is already well-established that Graph Neural Networks (GNNs) exhibit performance inconsistencies and that the effectiveness of different models often varies across datasets [1, 2]. Furthermore, the paper fails to propose a concrete solution to address this phenomenon. Inconsistencies in evaluation across sections further undermine its findings, as I will elaborate in my next comment.\\n- We strongly disagree with the notion that there are no new insights to be gained here and believe that there is a misunderstanding. In fact, both of the papers pointed to by the reviewer focus on dataset-level accuracy (i.e. average over all nodes in node-level tasks and average over all graphs in graph-level tasks). They then study how model performance varies between datasets. This is fundamentally different from the graph-level approach we take: we focus on how performance varies **within** datasets, not **between** them. As such, our approach is similar to what [1] have proposed in the vision space. To the best of our knowledge, no comparable studies have been conducted for GNNs.\\n \\n> The authors investigate the inconsistency of performance on graph-level tasks by examining a range of models and datasets. However, the presentation lacks clarity regarding the consistency of their selections. 
For instance, in Section 3.2, three datasets are analyzed using two GNN architectures [...]\\n- Many of the experiments are conducted on LRGB, which is a state-of-the-art graph learning benchmark. Some experiments involving the computation of the TMD are only performed on small data sets. As stated in Section 3.3, this is because TMD does not scale to larger datasets. \\n- We do not believe that testing rewiring methods with a global-attention based model like GPS would be insightful because the computation graph here is learned by the model (via attention) and is therefore not limited by the topology (unlike in message-passing). Our analysis based on consensus dynamics therefore does not immediately apply here. We would further like to point out that we are not aware of any papers in the literature that combine rewiring approaches with global-attention based models, most likely because those are usually assumed to not suffer from over-smoothing or over-squashing.\\n- In the next version, we will extend our investigation of heterogeneity in Section 2 to include additional transformer architectures. \\n\\n[1] Kaplun, Gal, et al. \\\"Deconstructing Distributions: A Pointwise Framework of Learning.\\\" The Eleventh International Conference on Learning Representations.\"}", "{\"metareview\": \"The paper studies performance heterogeneity of GNNs including MPNNs and transformer-based models, with a focus on graph-level tasks. It introduces class-distance ratios (CDRs) derived from the Tree Mover\\u2019s Distance (TMD) to explain the heterogeneity. Two methods are proposed to address heterogeneity: selective graph rewiring and a heuristic for choosing GNN depth based on spectral properties. The results are on relatively small datasets, raising concerns about applicability to larger, more complex benchmarks like OGB. Overall, the experimental results are not convincing to the reviewers (missing baselines, inconsistencies in experimental setup). 
The presentation quality could be improved. For future work, the authors should expand the evaluation (larger benchmarks and baselines), and should make the experimental setup more consistent.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Z81b highlighted concerns about the generalizability of the selective rewiring approach. The authors promised additional results in future revisions but did not provide immediate evidence during the rebuttal. Reviewer pTz2 requested more details and study of hyperparameters. While the authors provided some clarifications, the response lacked sufficient empirical additions to fully address the concerns. Reviewer 2jpS issued a strong rejection, citing the inconsistency in methodology and insufficient novelty. Reviewer wPWz also claims that many of their concerns remain. Overall, the response from the authors did not adequately address the concerns.\"}" ] }
5AtlfHYCPa
HR-Extreme: A High-Resolution Dataset for Extreme Weather Forecasting
[ "Nian Ran", "Peng Xiao", "Yue Wang", "Wesley Shi", "Jianxin Lin", "Qi Meng", "Richard Allmendinger" ]
The application of large deep learning models in weather forecasting has led to significant advancements in the field, including higher-resolution forecasting and extended prediction periods, exemplified by models such as Pangu and Fuxi. Despite these successes, previous research has largely been characterized by the neglect of extreme weather events, and the availability of datasets specifically curated for such events remains limited. Given the critical importance of accurately forecasting extreme weather, this study introduces a comprehensive dataset that incorporates high-resolution extreme weather cases derived from the High-Resolution Rapid Refresh (HRRR) data, a 3-km real-time dataset provided by NOAA. We also evaluate the current state-of-the-art deep learning models and Numerical Weather Prediction (NWP) systems on HR-Extreme, and provide an improved baseline deep learning model called HR-Heim, which has superior performance on both general loss and HR-Extreme compared to others. Our results reveal that the errors of extreme weather cases are significantly larger than the overall forecast error, highlighting them as a crucial source of loss in weather prediction. These findings underscore the necessity for future research to focus on improving the accuracy of extreme weather forecasts to enhance their practical utility.
[ "Weather Forecast Dataset", "Extreme Weather", "Deep Learning", "Numerical Weather Prediction" ]
Accept (Poster)
https://openreview.net/pdf?id=5AtlfHYCPa
https://openreview.net/forum?id=5AtlfHYCPa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rMasUMQPwk", "r389JZ1OuR", "oZeAhagU3w", "nAFI95Mxuv", "fbjfEHik0y", "ZVexEHmt3W", "VelrX2N2fH", "PSfCyJjBF5", "KKPwgeiusd", "JEdwnXa2eD", "H8i8NKWzku", "BtnGWgwOtH", "AQKMwUjWOq", "8m4vCtVvKz", "6C8kwLwm8j", "4ydmhU3ohp", "1C0CRZU292" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732262200635, 1731172421014, 1732262451395, 1732261922765, 1732262308501, 1734722419416, 1733006268969, 1732262065280, 1733024239612, 1732859837773, 1732517816562, 1730573288171, 1737524067113, 1732859875416, 1732511048327, 1730000265437, 1730620438284 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_wJpu" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Area_Chair_vFA2" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_tGGo" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_kLu6" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10636/Authors" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_kLu6" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_tGGo" ], [ "ICLR.cc/2025/Conference/Submission10636/Reviewer_Jbrg" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Jbrg [Part 1]\", \"comment\": \"Thank you for very much your positive 
feedback! We truly appreciate your insightful understanding and recognition of the key contributions of our study in terms of significance, comprehensiveness, and open-source code and dataset! We have revised our paper according to your suggestions and provide our responses to address your concerns below.\\n\\n> **Question about clustering method**\\n\\n**A**: Thank you for your question. The detailed introduction of the clustering method we used (DBSCAN) is already included at the end of the part \\\"NOAA Storm Prediction Center\\\" in Section 3.2, but to make it clearer, we have improved the wording and also quote it here for your convenience.\\n\\n\\\"After extensive case studies, we determined that DBSCAN (Ester et al., 1996) is most suitable for this task, as illustrated in Figure 1. For each timestamp, user reports are treated as 2D points based on normalized latitude and longitude on the x and y axes. DBSCAN identifies clusters based on point density, forming a cluster if there are enough points in close proximity. We carefully tune the hyperparameters of DBSCAN to create more intuitive clusters and to filter out noisy points more accurately, as shown in Figure 1. Noisy points are filtered out because they likely represent minor events or errors that are not significant enough to warrant creating a separate cropped area for evaluation.\\\"\\n\\n> **Question about model training**\\n\\n**A**: Thank you for your question. All models (Pangu, Fuxi, and our HR-Heim) were trained from scratch on HRRR data spanning the U.S. from January 2019 to June 2020. They were trained with identical hyperparameters and the same level of model parameters, and no hyperparameter tuning was applied to HR-Heim. Furthermore, none of these models were fine-tuned on our HR-Extreme dataset, ensuring a fair basis for comparison and evaluation. 
This clarification has been added at the beginning of Section 4.3 in blue.\\n\\n> **Concern about clustering results and more analysis**\\n\\n**A**: Thank you for your question. We have verified our clustering results for locating the area of each extreme event through intensive case studies when we built our dataset. We carefully tuned its hyperparameters to make the results consistent with the recorded information. We have also added an additional statistical analysis of physical variables in different extreme weather events in Appendix A.1, showing the distinct characteristics of extreme events compared to normal cases.\\n\\n> **Question about the sentence: \\\"types of events without specific ranges or those not related to obvious variations in feature map predictions\\\"**\\n\\n**A**: Thank you for your question. This sentence describes our data cleaning process for entries from the NOAA Storm Events Database. To make it clearer, we have reorganized the sentences as follows and revised them in Section 3.2 in orange. While most events include information on location and spatial range, some lack these critical details. Additionally, certain event types, such as avalanches and high surf, do not significantly impact the physical variables predicted by the models. To ensure greater accuracy, we have filtered out these events. \\n\\n> **Can data from different sources be used separately?**\\n\\n**A**: Yes! The index file generated along with the dataset allows users to easily retrieve information on event types, time spans, and locations for each event from any data source. However, using only a single data source would be incomplete, as each of the three data sources we utilize contributes unique information on different types of extreme events with varying extents. (More details are in Section 3.2.)\"}", "{\"summary\": \"Extreme weather forecasting is a crucial problem for the whole world. 
With the rise of deep learning-based weather forecasting models, their effectiveness on extreme weather is not well analyzed. This paper aims to provide a new benchmark for extreme weather forecasting. The authors employ the HRRR data and utilize the extreme event records from three sources to crop the extreme features from the original HRRR dataset. Experiments are conducted with four baselines to show the performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Pros:\\n\\n1. Extreme weather forecasting evaluation is an important research problem. \\n2. The provided dataset introduces 17 extreme events, which are comprehensive. \\n3. Authors also release the code for generating the data.\", \"weaknesses\": \"Cons:\\n\\n1. It is not clear how the HR-Heim model is trained. \\n2. Considering the dataset is a processed version of HRRR, it would be helpful to provide the geo-location of the extreme data to facilitate more diverse use by users. \\n3. While the dataset is valuable, almost no analysis is presented, especially compared to the ERA5 dataset, which is not insightful.\", \"questions\": \"please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer tGGo [Part 1]\", \"comment\": \"Thank you for your positive feedback. We truly appreciate your insightful understanding and recognition of the key contributions of our study in terms of significance, comprehensiveness, and a strong baseline model for weather prediction on HRRR data! We have revised our paper according to your suggestions and provide our responses to address your concerns below.\\n\\n> **More experiments with SOTA models and HR-Heim. Include prediction for more than a single hour ahead**\\n\\n**A**: Thank you for your suggestions. 
We have completed the experiments of the NWP model on the original HRRR data, and of the NWP model and HR-Heim on HR-Extreme, for lead times from 1 to 4, and added them to the end of Section 4.3 in blue. There is also additional statistical analysis in Appendix A.1. The experiments show that HR-Heim consistently outperforms the others at all lead times. Training and evaluation are highly resource-intensive and time-consuming; however, we want to note that the primary goal of this work is to address the gap in high-resolution datasets for extreme weather evaluation, encompassing comprehensive event types, rather than the HR-Heim model itself. We aim to highlight the significance of extreme weather prediction for future research and demonstrate the substantial room for improvement in the performance of current SOTA models. Additionally, we present a model improved for high-resolution prediction, which achieves better results on HRRR data and our extreme weather dataset. This serves as a strong baseline model for future predictions on HRRR data and its extreme events. \\n\\n> **Evaluate HR-Heim on a similar extreme weather dataset**\\n\\n**A**: Thank you for your suggestion. The experiments of HR-Heim at different lead times have been added to the paper at the end of Section 4.3 in blue, showing its consistently superior performance. HRRR data offers significantly higher resolution than ERA5, while the dataset you referenced is built on ERA5. In contrast, HR-Heim is specifically optimized for high-resolution prediction on HRRR and trained on HRRR data. Nonetheless, our dataset addresses the gap in high-resolution datasets for extreme weather, with its primary contribution being its application for model evaluation.\"}", "{\"title\": \"General Comments and Revision Summary\", \"comment\": [\"We thank all reviewers for the insightful reviews and helpful suggestions, and for the recognition of the importance, comprehensiveness, and contributions of our work. 
The revised version of our paper has been submitted, and we summarize our revisions below:\", \"According to the reviewer's suggestion, we have added additional statistical analysis of physical variables in normal and extreme events, showing the ability of our dataset. We have also added additional experiments of NWP and HR-Heim at more lead times; the results show that HR-Heim consistently outperforms the NWP model at different lead times.\", \"Many reviewers had concerns about the model training; we have clarified that all deep learning models (Pangu, Fuxi and HR-Heim) are trained with the same level of parameters and the same hyperparameters, and none of them are fine-tuned on HR-Extreme, enabling a fair comparison. This clarification has also been added to the paper.\", \"Some reviewers had questions about further details of the extreme events in our dataset; we have clarified that all such details can be found in the index file generated along with the dataset. This clarification has also been added to the paper.\", \"We have improved our writing to present more clearly our primary goals, contributions and significance.\", \"Finally, we want to thank all reviewers again for the constructive feedback that has made this paper better.\"]}", "{\"title\": \"Response to Reviewer kLu6 [Part 1]\", \"comment\": \"Thank you for your review; we understand your feedback. It seems there may have been some misunderstanding regarding the usage and purpose of our dataset, as well as the settings of the HRRR data.\\n\\n> **Concerns about the significance of our work**\\n\\n**A**: \\n- **Clarification of Dataset Purpose**: We understand that you may view our work as a dataset for training models to better predict extreme events. However, our work is actually a dataset for evaluating the performance of models on recorded extreme events based on HRRR data. 
We understand your concern about the uncertainty of extreme weather; you may think that our extreme dataset is built on a simulator. However, our dataset is systematically constructed from recorded extreme events and is primarily intended for evaluation. We have also stated in the future work section of our paper that fine-tuning future models on our dataset to improve their performance is only one possibility.\\n\\n- **Contributions and Significance**: The primary goal of this work is to address the gap in high-resolution datasets for extreme weather evaluation, encompassing comprehensive event types. We provide such a dataset to evaluate the ability of SOTA models on recorded extreme events. We aim to highlight the significance of extreme weather prediction for future research and demonstrate the substantial room for improvement in the performance of current SOTA models. Additionally, we present a model improved for high-resolution prediction, which achieves better results on HRRR data and our extreme weather dataset. This serves as a baseline model for future predictions on HRRR data and its extreme events.\\n\\n- **Correctness of our Dataset and NWP Model Usage**: You might have been misled by the HRRR dataset settings. The analysis at lead time 0 (often referred to as f00) is a blend of the model's background (a short-term forecast from the previous cycle) and recent observations, integrated through data assimilation techniques. This process incorporates various observational data, such as from aircraft, radar ..., to produce an accurate representation of the current atmospheric state. Therefore, the f00 output is heavily informed by actual observations. For lead times greater than 0 (e.g., f01, f02), the outputs are forecasts generated by the model based on the initial analysis [3]. 
Our dataset is built on data at lead time 0, while the NWP predictions use a lead time of at least 1, which enables a fair comparison to other models.\\n\\n- **Importance of Extreme Weather Nowcasting**: We want to state that extreme weather nowcasting is also a crucial task in practice, as shown by works such as NowcastNet [1] and DGMR [2], published in Nature. We have already considered the unusual atmospheric states before and after the extreme events; therefore, we incorporate several hours of atmospheric state data before and after the recorded events to include the comprehensive progression of an extreme event. On top of that, our open-source code is designed to allow users to adjust the temporal dimension, enabling arbitrary lead-time predictions or extending the atmospheric state analysis before and after the event. This flexibility supports both nowcasting and longer forecasting evaluation. \\n\\n>\\n[1] Zhang, Y., Long, M., Chen, K. et al. Skilful nowcasting of extreme precipitation with NowcastNet. Nature 619, 526\\u2013532 (2023). \\n[2] Ravuri, S., Lenc, K., Willson, M. et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 597, 672\\u2013677 (2021). \\n\\n[3] How do I download 15-minute HRRR data: https://github.com/blaylockbk/Herbie/discussions/67#discussioncomment-2537530\\n\\n> **Question about model training**\\n\\n**A**: Thank you for your question. All models (Pangu, Fuxi, and our HR-Heim) were trained on HRRR data spanning the U.S. from January 2019 to June 2020, from scratch. They were trained with identical hyperparameters and the same level of model parameters, and no hyperparameter tuning was applied to HR-Heim. Furthermore, none of these models were fine-tuned on our HR-Extreme dataset, ensuring a fair basis for comparison and evaluation, which means none of the models had seen the extreme events during training. 
This clarification has been added at the beginning of Section 4.3.\\n\\n> **Concerns about Figure 4**\\n\\n**A**: The intention is not to compare the models\\u2019 abilities to predict extreme events but to validate the dataset\\u2019s capability in identifying high-error regions associated with extreme weather. The purpose of Figure 4 is to demonstrate that our dataset effectively captures the regions with the greatest prediction loss for each model. These regions correspond to extreme weather events included in our dataset, which was constructed using recorded data and manual filtering. We have also added additional analysis of physical variables in Appendix A.1.\"}", "{\"metareview\": \"The authors introduce a high-resolution weather dataset for extreme weather based on the HRRR public data from NOAA (numerical weather prediction). They do this by processing HRRR through unsupervised clustering and filtering and creating a comprehensive benchmark of many extreme weather events over the US. The data is in an ML-training-ready format and open-sourced. They further introduce a new DL baseline model that outperforms other SOTA DL models on their dataset.\", \"strengths\": \"Timely dataset for extreme weather forecasting was acknowledged as a strong contribution by all reviewers, dataset is comprehensive, results show the shortcomings for current DL models\", \"weaknesses\": [\"The HR-Heim model contribution as a baseline model is not comprehensive, technical clarity can be improved\", \"The paper could be improved by primarily more discussion/text based on the reviewer's comments and my own reading. For example:\", \"analysis on ERA5 - The authors' rebuttal is satisfactory in that ERA5 is a coarse (but global) dataset. 
It could be beneficial to show this shortcoming of ERA5 to motivate the new dataset - missing extreme events, smooth/coarse prediction of events, etc., through similar visual images.\", \"NWP comparisons - discussion on why the HR-Heim model is better than NWP given that NWP is used to create the dataset in the first place - this could be a discussion on details of the HRRR dataset (analysis vs reanalysis states), how the NWP model works in the forecasting mode, what additional information the HR-Heim model sees (as well as other DL models) and why you expect this setting is close to the actual forecasting setting for NWP prediction centers.\", \"HR-Heim vs other models - discussion on why HR-Heim is a better architecture than the other models, given that they are all trained on the same dataset - what parts of HR-Heim contribute to high resolution prediction and what parts could be switched with other backbones. Typically, these are answered systematically with ablations but since the authors reduce this contribution in favor of the dataset primarily, a discussion will be beneficial.\", \"Model training discussion - while the authors' rebuttal partially clarifies how the models were trained, it is beneficial to also state training times and training resources. 
This also relates to the previous point, where the compute/memory cost comparisons across models will inform the reader as to the actual cost of getting the increased accuracy on extreme events.\", \"Minor: Fig resolution for Fig 1 can be improved and in general font sizes for figures can be bigger to allow for easy readability.\", \"The above improvements (primarily text changes) will help cater to the broader ML for science ICLR audience.\"], \"additional_comments_on_reviewer_discussion\": \"There were multiple concerns on details about how the models were trained, which the authors have mostly answered.\\nThe other questions (see improvement list above) were also raised and answered in the rebuttal, but I believe they should be included as a discussion in the text of the paper as well.\\n\\nwJpu had the least opinion and raised 3 weaknesses on model training, geo-location info in the data, and comparisons to ERA5. I believe the authors have answered the first two (see above for partial clarification on model training); for ERA5 comparisons, there could be more discussion - see improvement list above. \\n\\nThe reviewers were very positive on the dataset contribution. I weight some of the technical discussion gap more heavily for a complete understanding of the dataset as well as models used. I believe they can be addressed with textual changes and more information (no new experiments) which the authors mostly have in their rebuttal and hence still lean towards acceptance.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"The updated experiments involving HR-Heim seem to answer my concerns. I am updating my score accordingly.\"}", "{\"title\": \"Response to Reviewer wJpu [Part 1]\", \"comment\": \"Thank you for your positive comments, and thank you for your recognition of the paper\\u2019s importance, comprehensiveness and open-source code and dataset! 
We have revised our paper according to your suggestions and provide our responses to address your concerns below.\\n> **Question about how HR-Heim is Trained**\\n\\n**A**: Thank you for your question. All models (Pangu, Fuxi, and our HR-Heim) were trained on HRRR data spanning the U.S. from January 2019 to June 2020, from scratch. They were trained with identical hyperparameters and the same level of model parameters, and no hyperparameter tuning was applied to HR-Heim. Furthermore, none of these models were fine-tuned on our HR-Extreme dataset, ensuring a fair basis for comparison and evaluation. This clarification has been added at the beginning of Section 4.3 in blue.\\n\\n> **Questions about more metadata in dataset**\\n\\n**A**: Thank you for your question. This information is already included in the index file generated alongside the dataset by our code. The index file contains details on the location, range, type of extreme weather, and time span, obtained by integrating information from the three data sources. Users can easily convert this information from the index file to longitude and latitude coordinates with our open-source code. We have added this clarification at the beginning of Section 3.3 in blue.\\n\\n> **Questions about more analysis and comparison to ERA5**\\n\\n**A**: Thank you for your suggestion. We have also added an additional statistical analysis of physical variables in different extreme weather events in Appendix A.1, showing the distinct characteristics of extreme events compared to normal cases. In terms of ERA5, we considered ERA5 in our research; however, it is not suitable for building our extreme dataset. ERA5 data has a resolution of 31 km, while HRRR data provides a finer resolution of 3 km. Due to this difference, many extreme events with limited spatial range are more clearly represented in HRRR data but appear vague or may be entirely absent in ERA5. 
It is more accurate and beneficial for future work to develop our dataset using HRRR data.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for taking the time to reassess your concerns regarding our work, we are grateful for your constructive engagement!\"}", "{\"title\": \"Response\", \"comment\": \"Thank you once again for your valuable comments and suggestions, which have greatly improved our paper. We have carefully addressed your concerns by revising papers and making clarifications. Could you kindly let us know if you have any further concerns or suggestions?\"}", "{\"title\": \"Thanks\", \"comment\": \"We are very glad that our response has addressed your concerns! And thank you very much for your recognition of the value of our paper!\"}", "{\"summary\": \"The authors present a new dataset of labelled extreme weather events over the continental US based on a high resolution (3km) numerical forecast product. They compare the events from a numerical weather prediction (NWP) model as well as two baseline ML based weather models and a newly proposed variant. The authors claim improved skill in their new variant compared to the other baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The constructed dataset could provide a useful evaluation of extreme events in new ML based weather prediction models which tend to be evaluated on larger scale statistics, potentially hiding biases in such events. It is based on an operational high-resolution dataset with a mixture of automated and manual labelling.\", \"weaknesses\": \"There are a number of technical and fundamental weaknesses that undermine the above strengths.\", \"major_issues\": \"1) Extreme events are, by definition, rare. 
While a database of such events derived from a high-resolution reanalysis product could provide a useful starting point for evaluation, the specific events are much less useful without a probabilistic understanding or measure of their likelihood at different lead times. i.e., it is probably more likely that any of the given events *wouldn't* have happened given a similar atmospheric state if the conditions were encountered again. Without a probabilistic understanding of the probability of an event (based on lots of comparable events that weren't extreme), this dataset has very little value as presented.\\n 2) This leads to my second concern regarding the evaluation setup. Given the dataset extracts the atmospheric state at t=0, -1 and -2 hours, I presume the evaluations happen from an atmospheric state of t=-2 hours and run forward? This is no longer forecasting, but nowcasting and quite a different task, since the atmospheric state already contains the extreme event and the model just needs to advect it correctly. I have no idea how the NWP model is compared in this setting since presumably the authors don't run this explicitly with the extracted state? Or perhaps they do? Also, since the authors use the same NWP as was used to create the HRRR dataset, why doesn't it perform as well as the other models? At what resolution are the models run, are they run globally, over the US, or only over the event region? Are the global Fuxi and Pangu models retrained for these regions?\\n 3) Related to this, to what extent is this dataset used to train the different models? Does HR-Heim get to see some or any extreme event data during training? Given the issues described in (1), how do you avoid overfitting?\\n\\nOne more minor issue is that I would like to see Figure 4 presented for the same day for all 4 models, with a separate figure for the other day for all 4 models in the appendix. 
Currently it's impossible to fairly compare the skill of the models (although it seems like the NWP already does better than HR-Heim across the two examples).\", \"questions\": \"Please provide specific responses to each of the concerns and questions raised above in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response\", \"comment\": \"Thank you once again for your valuable comments and suggestions, which have greatly improved our paper. We have carefully addressed your concerns by revising several sections and conducting additional experiments and analyses. Could you kindly let us know if you have any further concerns or suggestions?\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to respond to my concerns. The clarifications on the task, and the setup of the NWP model have alleviated many of my concerns. I also appreciated the clarity on the training of each of the baseline models and hyperparameter tuning. I will update my score accordingly.\"}", "{\"summary\": \"This paper introduces a high-resolution dataset, called HR-Extreme, for numerical weather forecasting under extreme weather conditions, an area often overlooked in weather forecasting literature. The authors also present a baseline deep learning model alongside the dataset, called HR-Heim, which outperforms state-of-the-art weather models in extreme weather conditions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The authors provide a high-resolution dataset for numerical weather forecasting under extreme weather conditions. 
The need for such a dataset has been evident for some time, as highlighted by the low performance analysis of SOTA weather forecasting models.\", \"The construction of the dataset is well-motivated, and the authors provide a clear and thorough explanation of both the data collection and construction processes.\", \"Additionally, the authors provide a baseline deep learning model called HR-Heim, which is inspired by a SOTA numerical weather forecasting model, to specifically excel and outperform SOTA models under extreme weather conditions.\"], \"weaknesses\": [\"In Section 2.2, the authors discuss some datasets that share similarities (to a certain extent) with the proposed one. It would have been beneficial to illustrate the performance of the SOTA models and the proposed HR-Heim on some of these datasets. Specifically, the last dataset introduced by Liu et al., which also includes certain extreme weather conditions, could have been included in the experiments to support the need for HR-Extreme and support the performance of HR-Heim. Given that HR-Heim outperforms SOTA methods in HR-Extreme, we would expect to see similar results on other datasets with extreme weather conditions.\", \"As the authors also state as a limitation, this kind of study should include more than just a single-step prediction analysis. 
Given the increased difficulty of predicting extreme weather events in the long term, an accuracy comparison between HR-Heim and SOTA methods across varying time horizons could be valuable.\"], \"questions\": [\"Could you extend the dataset analysis to include predictions for more than a single hour ahead?\", \"As an extension to the previous question, could you analyze and compare the performance of state-of-the-art models and HR-Heim for both short-term and long-term forecasting?\", \"Could you further evaluate the performance of HR-Heim on a similar extreme weather dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work utilizes high-resolution HRRR data to create the HR-Extreme dataset, which encompasses a comprehensive set of 17 extreme weather types. The aim is to provide a more specialized dataset for evaluating the performance of weather forecasting models. To achieve this goal, the authors employed unsupervised clustering and manual filtering methods to develop a complete feature map of extreme events in a machine learning-ready format for the continental United States. The dataset was then used to assess the 1-hour forecasting capabilities of existing medium-range prediction models in comparison to the HR-Heim model proposed in this study.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. **Significance**: It fills the gap in benchmarking for extreme weather assessment in deep learning-based medium-range weather forecasting tasks.\\n\\n2. The dataset is clearly introduced, and it will be fully open-sourced.\\n\\n3. The authors present comprehensive experiments and baseline results.\", \"weaknesses\": \"1. 
The description of the dataset generation process lacks clarity in some areas: a more detailed introduction of the clustering method is needed, including how records from different sources are handled and the hyperparameters of the algorithm.\\n\\n2. There are also unclear aspects in the experimental description: \\n a) The baselines are models trained on globally coarse-resolution grids; was there any further fine-tuning on this dataset? What preprocessing steps were taken? \\n b) What is the training strategy for HR-Heim? Does it use the same hyperparameter settings on the original dataset and HR-Extreme?\", \"questions\": \"1. Was the result of the clustering algorithm verified by domain experts? Is there corresponding uncertainty detection and assessment?\\n\\n2. What is the basis for the \\\"types of events without specific ranges or those not related to obvious variations in feature map predictions\\\"?\\n\\n3. Can data from different sources be used separately?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
5AtHrq3B5R
PnP-Flow: Plug-and-Play Image Restoration with Flow Matching
[ "Ségolène Tiffany Martin", "Anne Gagneux", "Paul Hagemann", "Gabriele Steidl" ]
In this paper, we introduce Plug-and-Play (PnP) Flow Matching, an algorithm for solving imaging inverse problems. PnP methods leverage the strength of pre-trained denoisers, often deep neural networks, by integrating them in optimization schemes. While they achieve state-of-the-art performance on various inverse problems in imaging, PnP approaches face inherent limitations on more generative tasks like inpainting. On the other hand, generative models such as Flow Matching pushed the boundary in image sampling yet lack a clear method for efficient use in image restoration. We propose to combine the PnP framework with Flow Matching (FM) by defining a time-dependent denoiser using a pre-trained FM model. Our algorithm alternates between gradient descent steps on the data-fidelity term, reprojections onto the learned FM path, and denoising. Notably, our method is computationally efficient and memory-friendly, as it avoids backpropagation through ODEs and trace computations. We evaluate its performance on denoising, super-resolution, deblurring, and inpainting tasks, demonstrating superior results compared to existing PnP algorithms and Flow Matching based state-of-the-art methods. Code available at https://github.com/annegnx/PnP-Flow.
[ "Plug-and-Play", "Flow Matching", "image restoration", "inverse problems", "generative modeling" ]
Accept (Poster)
https://openreview.net/pdf?id=5AtHrq3B5R
https://openreview.net/forum?id=5AtHrq3B5R
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zg3oIm30kI", "s6m3bEe9kl", "qIjMtA9fjw", "plYd0lfwgU", "pAqy1hMrqv", "moTcdqVqXY", "kjYSpIGkTD", "e2bcen88PY", "Qh9uOCKUHg", "IFPB0tusgK", "HPG4zHydaq", "Do6PvhKsmw", "B9UNg1St4B", "ADhg6AAoq4", "8H3CUel2N9", "7lytLhJh2v", "7kGRbLYNUZ", "514qaZ9ynp", "0uyVhoYBPi" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730713683294, 1731495249127, 1732110299538, 1732614369984, 1733296457900, 1734684973870, 1732110435005, 1730647196328, 1733286832540, 1730943735802, 1732874195298, 1737523815843, 1730692103892, 1732874286770, 1732874360174, 1732110513025, 1732110638942, 1732530997981, 1732109848935 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7087/Reviewer_n2r9" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Reviewer_VJZf" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Area_Chair_62aJ" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Reviewer_VJZf" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Reviewer_pWBL" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7087/Reviewer_fVXi" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ "ICLR.cc/2025/Conference/Submission7087/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7087/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a zero-shot method (PnP-flow) for inverse problems based on a pre-trained flow-matching (FM) model. The method combines the plug-and-play (PnP) framework with flow matching by alternating between gradient descent steps on the data-fidelity term, reprojections onto the learned FM path, and denoising. PnP-flow achieves state-of-the-art (SOTA) results compared to existing PnP and flow-based algorithms across different image inverse problems.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The method is training-free, which makes it computationally practical.\\n2. The method achieves SOTA results compared to existing flow-based methods.\", \"weaknesses\": \"1. My major concern is the lack of comparison to recent zero-shot methods based on a pre-trained diffusion model, such as DDNM [1] and DPS [2].\\n2. The proposed method is non-blind (it assumes full knowledge of the degradation model), which limits its applicability.\\n\\n\\n[1] Wang et al. Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model. ICLR 2023\\n\\n[2] Chung et al. Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR 2023\", \"questions\": \"1. Could you add comparisons with [1] and [2], or explain why those comparisons are missing?\\n2. Could you comment on the potential applicability/extension of your method to the blind case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarification for rebuttal\", \"comment\": \"Thank you for your review. We will address your comments on the numerical section more thoroughly in our next response. It seems that you found certain parts of the paper unclear, while the other reviewers considered the presentation to be good. 
Could you kindly provide specific points where the explanation was unclear, and indicate what additional details you believe are necessary? For reference, the function F (which is the data-fidelity term) is defined on line 45, and the steps of the algorithm are detailed in Section 3.2. We look forward to making the necessary revisions to enhance the clarity of the paper.\"}", "{\"title\": \"Response to questions\", \"comment\": \"Regarding Weakness:\\nW1 and W2. Since all other reviewers found the presentation good (rated 3), we would appreciate a clarification on which parts you think need more explanation. Without your feedback, we cannot improve the presentation.\", \"regarding_the_questions\": \"Q1. We implemented a super-resolution task with a Laplacian noise degradation model (which is heavy-tailed and therefore challenging). Some of the Flow-Matching based methods were not applicable as they are designed for Gaussian noise (D-Flow, OT-ODE), but we benchmarked against PnP Diffusion, PnP GS, and Flow priors. The evaluation of the methods on 100 test images is reported in the table below and in Appendix A.11 (Table 9) with more details. \\n| **Method** | **PSNR ($\\\\sigma=0.1$)** | **SSIM ($\\\\sigma=0.1$)** | **LPIPS ($\\\\sigma=0.1$)** | **PSNR ($\\\\sigma=0.3$)** | **SSIM ($\\\\sigma=0.3$)** | **LPIPS ($\\\\sigma=0.3$)** |\\n|--------------------|-------------------------|--------------------------|--------------------------|-------------------------|--------------------------|--------------------------|\\n| **Degraded** | 8.82 | 0.086 | 0.845 | 8.76 | 0.040 | 0.865 |\\n| **PnP-Diff** | *29.67* | *0.866* | **0.059** | 24.82 | *0.730* | 0.268 |\\n| **PnP-GS** | 28.50 | 0.828 | 0.079 | *25.31* | 0.725 | *0.185* |\\n| **Flow-Priors** | 27.53 | 0.721 | 0.102 | 21.72 | 0.454 | 0.376 |\\n| **PnP-Flow (Ours)**| **30.30** | **0.895** | *0.063* | **26.50** | **0.809** | **0.108** |\\n\\n\\nQ2. We added DDRM and DPS to our benchmark. 
For the results, see Appendix A.13 (Tables 10 and 11).\\n\\nQ3. While we agree that this would be a very interesting experiment, we are unfortunately unaware of any publicly available unconditional pretrained flow matching model for ImageNet. Training one from scratch would require resources beyond our current capacity.\\n\\nQ4. Zooming into the images of Fig. 3 and 4 in the paper or Fig. 7 in the appendix, one can actually see important visual differences, which are discussed in Section 5.3. Note that none of the competing methods performs as consistently across all tasks as our method. \\n\\nQ5. In the revised version of the paper, we conduct new blind-restoration experiments. We adapt our method PnP-Flow to cases of unknown blur or unknown mask without any additional data, yielding excellent visual results. Details and results have been added in Appendix A.14.\\n\\nQ6. We did not observe failure cases for our method. As opposed to the benchmarked methods, our method is not prone to artifacts and other kinds of failures, although it tends to produce averaged results (slightly over-smoothed), consistent with its interpretation as a minimum-mean-square-error (MMSE) estimator. Interestingly, a competing method like OT-ODE may generate more realistic textures but often produces highly artifacted and distorted images, as shown in Figure 7 in the appendix.\\n\\nQ7. Benchmarks were conducted exclusively for Flow-Matching-based methods, as they share the same network architecture. A comparison with diffusion methods would not be relevant here.\"}", "{\"comment\": \"Thanks a lot for your response. I'm convinced by the method proposed by the authors. In particular, the proposed method is more efficient than the previous method. In addition, it generalizes well to several image restoration tasks. 
Thus, I decided to keep my rating of this paper.\"}", "{\"title\": \"Contributions Summary\", \"comment\": \"For all interested readers, we list our contributions below:\\n\\n1) We proposed a Plug-and-Play version of Flow Matching, which introduces a time-dependent denoiser based on the velocity from flow matching. \\n\\n2) We compared extensively against other flow matching (and diffusion) methods, which did not have available code. We make the comparisons public. \\n\\n3) We show that our proposed method has theoretical advantages over diffusion (straight paths), and practical advantages (non-Gaussian latent). \\n\\n4) During the rebuttal, we also added more diffusion baselines, and showed that our method can be applied to blind inverse problems and non-Gaussian noise.\"}", "{\"metareview\": \"Reviewers agree with the practicality of the proposed method and its novelty.\\nReviewers pWBL and n2r9 had some concerns about the validation of the model, which the authors addressed and included in their rebuttal, but the reviewers did not further comment on those results.\\nReviewer pWBL made further comments about the text clarity, and this was again addressed by the authors.\\nGiven that the authors responded to reviewer comments and other reviewers were happy with the paper's contribution and evaluation, this paper is recommended for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers were not responsive to author feedback.\"}", "{\"title\": \"Response to questions\", \"comment\": \"Q1. We added comparisons with DPS and DDRM in Appendix A.13 (Tables 10 and 11). DDNM seems like a very interesting approach for future research: indeed, we could use a similar null-space decomposition and only perform our proposed PnP Flow within this null space. We add a discussion on DDNM and related diffusion methods in the appendix; see Appendix A.12.\\n\\nQ2. Yes, our method is adaptable to the blind case, and we thank the reviewer for this interesting suggestion. 
We demonstrate this on a preliminary experiment on blind deconvolution and blind inpainting where we only assume access to the observation and the pre-trained model, without additional data. Then we iterate (over time) between our algorithm steps and update steps on a randomly initialized kernel/mask. We see that the reconstruction is of higher quality than the blurry measurement. See the promising results in Appendix A.14.\"}", "{\"summary\": \"In this paper, the authors proposed a plug-and-play image restoration method based on flow matching. The reformulation starts from the forward-backward splitting algorithm, where the proximal step is replaced by a denoising step to form the plug-and-play forward-backward splitting algorithm. The authors insert a specific flow matching method, namely straight-line flows, into the PnP-FBS framework due to the computational efficiency of the straight-line flows. Formally, the PnP flow matching algorithm consists of three steps: a gradient step on the data fidelity term, an interpolation step, and a PnP denoising step that is specifically designed to denoise inputs drawn from the straight path.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. A new plug-and-play method based on flow matching is proposed in this paper.\\n\\n2. The paper is well-written.\\n\\n3. The derivations in this paper are rigorous.\\n\\n4. The computational complexity and memory footprint of the proposed method are lower than those of the previous methods due to the careful design.\", \"weaknesses\": \"1. The restored images seem to be over-smoothed.\", \"questions\": \"1. Please explain why the computational complexity and memory footprint of the proposed method are lower than those of the previous method. 
Is it due to the design of the model or the choice of the straight-line flow?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Summary\", \"comment\": \"The review period is now over, and there unfortunately has not been any discussion regarding our paper, although we really tried to initiate one. We believe that we have solid contributions, and addressed the reviewers' concerns.\\nMany thanks to the reviewer, who had already given a score of 8 and underlined again that she/he believes in our work.\"}", "{\"summary\": \"This paper introduces the PnP-Flow Matching algorithm for addressing imaging inverse problems, including denoising, super-resolution, deblurring, and inpainting. The method combines the Plug-and-Play (PnP) framework with Flow Matching (FM) models by using a time-dependent denoiser to tackle image restoration tasks. Specifically, the algorithm alternates between gradient descent on a data fidelity term, reprojection onto a flow matching path, and denoising. The experiments demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Low memory usage, making it suitable for high-resolution images.\\n2. Consistently performs well across multiple tasks, showing stable PSNR and SSIM improvements.\", \"weaknesses\": \"1. The writing quality needs improvement. Certain explanations lack clarity, particularly in describing the algorithmic process, e.g., the function F.\\n2. The details of the proposed method are insufficient.\\n3. The experiment section should be improved. Please refer to the details below.\", \"questions\": \"1. The formula \\\"y = noisy(Hx)\\\" uses a general definition for the noisy function. It would be helpful if the paper and experiments explored multiple types of noise to assess the method\\u2019s robustness.\\n\\n2. 
In Tables 1 and 2, the comparison is limited, particularly with only one diffusion-based method, PnP-Diff, which is a workshop paper, not a main conference paper. The authors should include comparisons with more diffusion-based methods, such as DPS, DeqIR, and DDRM, to provide a fuller view of how their method performs relative to the latest diffusion techniques.\\n\\n3. The authors could enhance the evaluation by including the ImageNet dataset for the denoising and deblurring tasks. Testing the method across various noise levels and degrees of blur on a large, diverse dataset like ImageNet would offer more insight into how well the algorithm handles different types of degradation.\\n\\n4. In Figure 3, the visual results do not show a significant improvement over other methods (e.g., in the last row), even though the PSNR scores are higher. \\n\\n5. For real-world data with unknown degradation, it would be important to understand how well this method generalizes. \\n\\n6. It would strengthen the paper if the authors included examples of failure cases. \\n\\n7. In Table 3, not all methods are compared for computational time and memory usage. Including all relevant methods in this comparison would give a clearer picture of how the proposed algorithm stacks up in terms of efficiency across different benchmarks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer pWBL,\\n\\nthank you for your review. We appreciate your feedback. We carefully revised our manuscript according to your concerns. We hope that the revised version **meets your expectations**. If you have any **remaining concerns**, please let us know as soon as possible. 
If not, we would kindly invite you to consider **raising the score**.\\n\\nSincerely, the authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes to use flow matching in the plug-and-play framework for image restoration. The key is to use the FM model as the denoiser. To avoid numerical challenges, it integrates the implicit FM prior into a custom denoiser.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1, It proposes to design a time-dependent denoiser based on a pre-trained velocity field v learned through Flow Matching\\n\\n2, This denoiser is integrated into an adapted Forward-Backward Splitting PnP framework that cycles through a gradient step on the data-fidelity term, an interpolation step and a denoising step\\n\\n3, Being computationally efficient and memory-friendly via the use of ODE\", \"weaknesses\": \"1, Why are the perceptual metrics missing? From the visual results, it also seems that the results tend to be blurry. What\\u2019s the underlying reason? Is it due to the gradient step or the interpolation step, or something else?\\n\\n2, In addition, one of the advantages of these generative methods is their high perceptual quality, but this method seems to have achieved good distortion performance. How about the results of employing the same end-to-end U-Net model as a simple baseline (for example, using the L1 loss)?\\n\\n3, Can you visualize all the intermediate results of all three steps for all time steps? It could better help readers understand the method.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer n2r9,\\n\\nthank you for your review. We appreciate your feedback. We carefully revised our manuscript according to your concerns. 
We hope that the revised version **meets your expectations**. If you have any **remaining concerns**, please let us know as soon as possible. If not, we would kindly invite you to consider **raising the score**.\\n\\nSincerely, the authors\"}", "{\"title\": \"Reminder\", \"comment\": \"Dear reviewer fVXi,\\n\\nthank you for your review. We appreciate your feedback. We carefully revised our manuscript according to your concerns. We hope that the revised version **meets your expectations**. If you have any **remaining concerns**, please let us know as soon as possible. If not, we would kindly invite you to consider **raising the score**.\\n\\nSincerely, the authors\"}", "{\"title\": \"Response to questions\", \"comment\": \"Q1. We computed the LPIPS metric; see Appendix A.10. We have competitive performance, with no method consistently leading the LPIPS metric. Our method seems to be more on the \\u201cdistortion\\u201d side of the perception/distortion tradeoff. As shown in [4], a method cannot consistently yield high perception and low distortion images; therefore, one should generally not expect one method to beat all the others in both perceptual and distortion metrics.\\n\\nQ2. We think this is an interesting suggestion; however, in this paper we want to focus on pretrained flow matching/diffusion/denoising models since they do not rely on paired data and are not task-specific. If we understand your suggestion correctly, a U-Net would need to be retrained for changes in the forward operator or degradation noise model. \\n\\nQ3. We appreciate the suggestion. A new visualization can be found in Appendix A.9, Figure 13.\\n\\n[4] Blau, Michaeli, The perception-distortion tradeoff, CVPR 2018\"}", "{\"title\": \"Response to questions\", \"comment\": \"Q1. Indeed, our images tend to be slightly oversmoothed, but we see this not as a weakness but rather as inevitable due to the distortion-perception tradeoff. 
Our method seems to be more on the \\u201cdistortion\\u201d side of the perception/distortion tradeoff (implying high PSNRs and smoother images). As shown in [4], a method cannot consistently yield high perception and low distortion images; therefore, one should generally not expect one method to beat all the others in both perceptual and distortion metrics.\\n\\nQ2. Indeed, our method is the cheapest among the benchmarked flow matching methods. This is because we only need a few evaluations of the velocity field. D-Flow needs to backpropagate through the ODE solution (very memory intensive) and flow priors needs to compute the trace of the flow\\u2019s Jacobian. OT-ODE requires solving a linear inverse problem at each iteration.\\n\\n[4] Blau, Michaeli, The perception-distortion tradeoff, CVPR 2018\"}", "{\"title\": \"Friendly reminder\", \"comment\": \"Dear reviewers, since the discussion period is ending soon, we kindly remind you to give us feedback on the revised version of our paper. We took great care and effort in revising it and addressed all your concerns.\"}", "{\"title\": \"General answer to all the reviewers\", \"comment\": \"We thank all the reviewers for their effort and constructive feedback. All reviewers appreciated the excellent performance of our method across multiple tasks and its computational efficiency and low memory footprint.\\n\\nWe uploaded a revised version of the paper according to all the reviewers' concerns, with modifications appearing in blue. In particular, we made a big effort to add numerical experiments. We address some common/major points in this general answer.\\n\\n- Based on the suggestion of reviewer pWBL, we added an experiment incorporating Laplace degradation noise. Benchmarking against other methods demonstrates the adaptability of our PnP-flow method. 
Here, we outperform competing methods across all metrics.\\n- We added more diffusion baselines, namely DPS [1] and DDRM [2], as recommended by reviewers n2r9 and pWBL.\\n- As suggested by reviewers pWBL and n2r9, we added numerical experiments for blind inverse problems where the forward operator is unknown.\\n- Following the suggestion of reviewer fVXi, we added a perceptual metric, LPIPS [3].\\n\\n[1] Chung et al, Diffusion Posterior Sampling for general noisy inverse problems, ICLR 2023\\n\\n[2] Kawar et al, Denoising diffusion restoration models, NeurIPS 2022\\n\\n[3] Zhang, The unreasonable effectiveness of deep features as a perceptual metric, CVPR 2018\"}" ] }
5AoOHSickG
FoundationForensics: Traceback Backdoor Attacks for Vision Foundation Models
[ "Hongbin Liu", "Zedian Shao", "Yuqi Jia", "Jinghuai Zhang", "Minghong Fang", "Cheng Hong", "Neil Zhenqiang Gong" ]
Foundation models are typically pre-trained on uncurated unlabeled data collected from various domains on the Internet. As a result, they are fundamentally vulnerable to backdoor attacks, where an attacker injects carefully crafted poisoned inputs into the pre-training data via hosting them on the Internet. A backdoored foundation model outputs an attacker-desired embedding vector for any input with an attacker-chosen trigger. In this work, we propose FoundationForensics, the first forensics method to trace back poisoned pre-training inputs for foundation models after a backdoor attack has happened and a trigger-embedded input has been detected. Our FoundationForensics first calculates a maliciousness score for each pre-training input by quantifying its contribution to the foundation model's backdoor behavior for the detected trigger-embedded input and then detects the pre-training inputs with outlier maliciousness scores as poisoned. We theoretically analyze the security of FoundationForensics and empirically evaluate it on single-modal and multi-modal foundation models, three datasets, four existing backdoor attacks, and seven adaptive ones. Our results show that FoundationForensics can accurately traceback the poisoned pre-training inputs for foundation models.
[ "Backdoor Attacks", "Foundation Models" ]
https://openreview.net/pdf?id=5AoOHSickG
https://openreview.net/forum?id=5AoOHSickG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vlBvx85XeQ", "p03ZNk7oE3", "iydUJJi8o3", "VLsfzPvcbQ", "3fdrMYJ0dp" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731639537749, 1729696315140, 1730606913982, 1730696422761, 1730563433285 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11897/Authors" ], [ "ICLR.cc/2025/Conference/Submission11897/Reviewer_fdkA" ], [ "ICLR.cc/2025/Conference/Submission11897/Reviewer_AanN" ], [ "ICLR.cc/2025/Conference/Submission11897/Reviewer_AeQw" ], [ "ICLR.cc/2025/Conference/Submission11897/Reviewer_ah3M" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces an innovative forensic technique called **FoundationForensics**, aimed at detecting and tracing poisoned inputs in vision foundation models that have undergone backdoor attacks. The approach relies on a key observation: the similarity among poisoned inputs is generally higher than the similarity between poisoned and clean inputs. Based on this observation, the authors further introduce a \\\"maliciousness score\\\" to measure the contribution of pre-training inputs to the backdoor effect. Additionally, the paper provides a theoretical analysis of the validity of the malicious score and conducts extensive experimental evaluations across multiple foundation models and datasets. 
The experimental results demonstrate that FoundationForensics can effectively identify poisoned inputs with high accuracy, even in scenarios involving adaptive attacks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\t**Novelty:** This paper addresses a relevant and underexplored problem\\u2014tracking poisoned pre-training data in foundation models\\u2014filling a gap in existing backdoor defenses.\\n2.\\t**Theoretical Basis:** The paper offers a clear theoretical framework, including a proof that the malicious score can distinguish between poisoned and clean inputs, enhancing the credibility of the approach.\\n3.\\t**Comprehensive Evaluation:** The experimental analysis covers multiple datasets, various vision foundation models, and different types of backdoor attacks, showcasing the method\\u2019s generalizability and robustness.\", \"weaknesses\": \"1.\\t**Access Assumptions:** The method assumes access to specific pre-training checkpoints and the ability to perform gradient calculations throughout the pre-training process. However, this assumption may be impractical when dealing with pre-trained models from third-party sources.\\n2.\\t**Limited Coverage of Adaptive Attacks:** Although adaptive attacks were tested, the paper does not deeply explore the limitations of FoundationForensics against more sophisticated adaptive strategies, such as advanced backdoor techniques using natural features as triggers, which might obscure the contribution of poisoned inputs.\\n3.\\t**Storage and Computational Costs:** FoundationForensics relies on storing multiple checkpoints and computing their malicious scores, which could involve substantial computational and storage overhead. In large-scale scenarios, such costs may be prohibitive.\\n4.\\t**Dependency on Malicious Score Sensitivity:** The effectiveness of anomaly detection heavily relies on parameter tuning, particularly the hyperparameter \\\\(k\\\\) in the MAD method. 
While the paper discusses the choice of \\\\(k\\\\) in a limited setting (e.g., testing PE-II attacks on the CIFAR-10 dataset), different datasets, models, or attack scenarios may require separate adjustments for \\\\(k\\\\).\", \"questions\": \"1.\\t**Robustness Against Advanced Attacks:** How does FoundationForensics handle cases where an attacker embeds multiple triggers in the inputs, potentially reducing the malicious score of individual inputs?\\n2.\\t**Generalization Beyond Vision Models:** Can the method be extended to foundation models beyond the visual domain (e.g., language models)? If so, what modifications are required?\\n3.\\t**Scalability:** As dataset size increases and model complexity grows, how does the computational cost scale? Are there optimization methods to reduce the storage overhead for saving checkpoints?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces FoundationForensics, a pioneering method for tracing back poisoned pre-training inputs in foundation models after a backdoor attack. These models, pre-trained on uncurated data from the Internet, are vulnerable to such attacks, where an attacker inserts malicious inputs to manipulate the model's outputs. FoundationForensics identifies these poisoned inputs by calculating a maliciousness score for each pre-training input and flagging those with outlier scores. The method is both theoretically secure and empirically effective, as shown through tests on various models, datasets, and attack types.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Timeliness and Importance of Topic: The focus on tracing back malicious pre-training inputs in foundation models addresses a critical and timely challenge, especially as the use of such models becomes pervasive across various applications. 
This work is particularly relevant given the increasing dependence on large-scale, unlabeled datasets sourced from the Internet, where the risk of encountering maliciously poisoned data is high.\\n\\n2. Theoretical Analysis of Maliciousness Score: The inclusion of a theoretical analysis that articulates the properties of the proposed maliciousness score enhances the credibility and robustness of the approach. By providing formal proofs that poisoned inputs contribute disproportionately to the similarity metric exploited by the backdoor, the paper grounds its empirical findings in solid theoretical foundations.\", \"weaknesses\": \"1. Threat Model: A significant concern with the threat model is the assumption that the pre-training dataset is always available for forensic analysis. This assumption may not hold in the context of foundation model pre-training, where the datasets used are often proprietary and not publicly accessible due to privacy or competitive reasons. The paper's applicability is thus questionable in real-world scenarios where access to pre-training data is restricted or non-existent.\\n\\n2. Practicality: The evaluation of the forensic method on datasets with up to 500,000 inputs (as per Table 1(a) raises concerns about its scalability and practicality. Foundation models are typically trained on datasets that are orders of magnitude larger, often encompassing billions of data points. The method's performance and feasibility on such a scale remain untested, which may limit its usefulness in practical, large-scale applications.\\n\\n3. Design Details: The paper distinguishes between \\\"all pre-training inputs\\\" and \\\"pre-training steps that involve $x_i$,\\\" but this distinction might not effectively capture the individual impact of $x_i$. How does the change in Eq(3) show the isolated impact of $x_i$? 
Also, Line 191 said, \"we approximate $f_{t+1} - f_t$ as if only the pre-training input $x_i$ was used to update the foundation model.\" Why is this a fair and reasonable approximation?\", \"questions\": \"Please respond to weaknesses mentioned above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces FoundationForensics, a framework designed for tracing backdoor samples within vision foundation models. The approach leverages detected pairs of backdoor and clean reference samples to compute their contributions to the backdoor loss during the pre-training phase. Samples exhibiting unusually high contributions are flagged as potential backdoor samples. Empirical results across single-modal and multi-modal vision foundation models, tested on three datasets, indicate that FoundationForensics effectively identifies poisoned samples from pre-training sets and surpasses existing baseline methods. The paper also includes a theoretical justification for the framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n2. The proposed method is novel and intuitive.\\n\\n3. The evaluation results demonstrate the potential of the proposed FoundationForensics framework.\", \"weaknesses\": \"1. The threat model is weak; particularly, the assumption of having a pair of clean and poisoned samples weakens the contribution of the proposed technique.\\n\\n2. The scalability of the proposed framework is not well evaluated, especially on larger datasets.\\n\\n3. Lack of numerical experiments to support the assumption and hypothesis used in the theoretical analysis.\\n\\n4. Lack of discussion on several related works.\\n\\n5. Lack of discussion on potential adaptive attacks, such as a poisoned model with multiple backdoors.\", \"questions\": \"1. 
Threat Model Assumptions: The reliance on a clean and poisoned sample pair weakens the overall contribution of the proposed technique. Under this threat model, a simpler and more intuitive approach could involve cropping the backdoor trigger from the detected sample and comparing it with training samples. Given that the paper primarily evaluates patch triggers, such a basic method might suffice and diminish the necessity for the more complex FoundationForensics framework.\\n\\n2. Scalability: The paper does not adequately address the scalability of FoundationForensics, which is crucial given the large-scale, unlabeled data (e.g., LAION[1], DataComp[2]) typically used for training vision foundation models. The current evaluations are limited to small datasets, not reflective of real-world training scales. Although there may not be significant technical barriers to extending the framework to larger datasets, concerns about computational overhead and the storage required for intermediate model checkpoints remain. Additional experiments and discussions in this context would strengthen the paper.\\n\\n3. Theoretical Analysis Validation: Definition I claims that when a backdoor sample updates the model weights, the cosine similarity between it and a reference sample is greater compared to updates from clean samples. Numerical experiments that support this assertion would enhance the theoretical argument. Furthermore, recent research[3] indicates that backdoor training often converges faster than benign tasks. Could this faster convergence create counterexamples to the claims made in Definition I?\\n\\n4. Related Works: The paper lacks a comprehensive discussion and comparison with similar backdoor forensics frameworks, such as [4]. Including these would situate FoundationForensics more clearly within the existing body of research.\\n\\n5. Adaptive Attacks: The framework does not explore potential adaptive attack scenarios. 
For example, how would FoundationForensics perform if a model contained multiple backdoors? Is it assumed that defenders would possess samples corresponding to each backdoor trigger?\\n\\nReferences \\n---\\n\\n[1] Schuhmann, Christoph, et al. \\\"Laion-400m: Open dataset of clip-filtered 400 million image-text pairs.\\\" arXiv preprint arXiv:2111.02114 (2021).\\n \\n[2] Gadre, Samir Yitzhak, et al. \\\"Datacomp: In search of the next generation of multimodal datasets.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Li, Yige, et al. \\\"Anti-backdoor learning: Training clean models on poisoned data.\\\" Advances in Neural Information Processing Systems 34 (2021): 14900-14912.\\n\\n[4] Cheng, Siyuan, et al. \\\"Beagle: Forensics of deep learning backdoor attack for better defense.\\\" arXiv preprint arXiv:2301.06241 (2023).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel forensics method to trace back poisoned pre-training data for backdoored foundation models by quantifying its contribution to the backdoor event. Their proposed metric, the maliciousness score, is shown to be effective through extensive experiments. In particular, their theoretical analysis makes their conclusions more reliable.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an intriguing and novel method to trace back poisoned pre-training data for foundation models for the first time. In particular, their MAD-based detection method is straightforward.\\n2. The paper also offers a sound theoretical analysis, making their findings more meaningful.\\n3. Regarding the experimental setting, they test numerous backdoor attack methods, making their forensics method valid. There is still some room for improvement (see weaknesses).\", \"weaknesses\": \"1. The presentation should be improved. 
First, the introduction of forensics is inadequate. The paper should provide more background on forensics methods and how they can be used in real-world scenarios. Second, section 6.1 is not well-structured. For example, the description of which model is trained/finetuned on ImageNet100-B cannot be found.\\n2. Their proposed forensics method needs the service provider to collect intermediate model checkpoints in advance. This setting makes their method less realistic.\\n3. They only test input-agnostic backdoor attacks. Whether their forensics is valid for input-specific backdoors (e.g., [1]) is unknown.\\n\\n\\n\\n[1] Lee, Yeonjoon, et al. \\\"Aliasing backdoor attacks on pre-trained models.\\\" *32nd USENIX Security Symposium (USENIX Security 23)*. 2023.\", \"questions\": \"1. For reference and backdoor inputs, do they come from a downstream dataset and never appear in the pre-training dataset?\\n2. Is this method also valid for language foundation models under backdoor attacks (e.g., POR[1] and NeuBA[2])?\\n\\n[1] Shen, Lujia, et al. \\\"Backdoor Pre-trained Models Can Transfer to All.\\\" *Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security*. 2021.\\n\\n[2] Zhang, Zhengyan, et al. \\\"Red alarm for pre-trained models: Universal vulnerability to neuron-level backdoor attacks.\\\" *Machine Intelligence Research* 20.2 (2023): 180-193.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5Aem9XFZ0t
Zero-shot Concept Bottleneck Models via Sparse Regression of Retrieved Concepts
[ "Shin'ya Yamaguchi", "Kosuke Nishida", "Daiki Chijiwa", "Yasutoshi Ida" ]
Concept bottleneck models (CBMs) are inherently interpretable neural network models, which explain their final label prediction by high-level semantic \textit{concepts} predicted in the intermediate layers. Previous works of CBMs have succeeded in achieving high-accuracy concept/label predictions without manually collected concept labels by incorporating large language models (LLMs) and vision-language models (VLMs). However, they still require training on the target dataset to learn input-to-concept and concept-to-label correspondences, incurring target dataset collections and training resource requirements. In this paper, we present \textit{zero-shot concept bottleneck models} (Z-CBMs), which are interpretable models predicting labels and concepts in a fully zero-shot manner without training neural networks. Z-CBMs utilize a large-scale concept bank, which is composed of millions of noun phrases extracted from caption datasets, to describe arbitrary input in various domains. To infer the input-to-concept correspondence, we introduce \textit{concept retrieval}, which dynamically searches input-related concepts from the concept bank on the multi-modal feature space of pre-trained VLMs. This enables Z-CBMs to handle the millions of concepts and extract appropriate concepts for each input image. In the concept-to-label inference stage, we apply \textit{concept regression} to select important concepts from the retrieved concept candidates containing noisy concepts related to each other. To this end, concept regression estimates the importance weight of concepts with sparse linear regression approximating the input image feature vectors by the weighted sum of concept feature vectors. Through extensive experiments, we confirm that our Z-CBMs achieve both high target task performance and interpretability without any additional training.
[ "concept bottleneck models", "interpretability", "retrieving", "sparse linear regression", "vision-language models" ]
Reject
https://openreview.net/pdf?id=5Aem9XFZ0t
https://openreview.net/forum?id=5Aem9XFZ0t
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wroIslrqK3", "wMH8cHXabj", "umVjKnHtzN", "sCXuirCw7U", "pCrxlfBeqj", "oTFeZQH5EX", "m5BC5ZPKnV", "i65rmczEL0", "guwMiI1WJu", "eykpNOzBWZ", "XBNWRtB41z", "WBvOWId6lT", "V7CpeNXDoB", "UpR8o4jP6n", "UF2Lijd7gV", "Td1U25nZm2", "TGCW1V2oav", "QKP3UrmZX4", "Q1fA9jHv6y", "Nbut0RIJF6", "M3vz4h0Rb9", "KrTzKPrSnJ", "KZObPOAOcY", "KNz8h2VDnj", "KHgm80UgFb", "JgpjGl2Ekl", "Hx8SArElP6", "HlWOrMNkNh", "Gz6zw153P7", "Gf4ZaDBEUZ", "C4kD6uBm4j", "Au9Kh2LTgO", "5PMqNT13z5", "4Wt1LFVX17", "3MsmKqq5la", "1WBZOqQqOt" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733023370146, 1732844311449, 1734349349858, 1732844366156, 1730656037728, 1732602171819, 1732669557312, 1732004385326, 1732504354851, 1732004200412, 1732844442203, 1737523627385, 1732507083603, 1733023330476, 1732617818298, 1732507219174, 1730720080711, 1732004312981, 1730545769561, 1732004749719, 1732777352687, 1732545503165, 1730570428826, 1730110713260, 1732601767743, 1732507134798, 1732602854468, 1732506980027, 1732659381241, 1732505809538, 1729063869786, 1732004491025, 1733023500710, 1732004672743, 1732665484034, 1732004571138 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Area_Chair_QfmU" ], [ 
"ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_Rb4A" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_mgBg" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_ReJr" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_ReJr" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_xHwx" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_FScg" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_FScg" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_Q8YW" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_xHwx" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_Rb4A" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Reviewer_mgBg" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ], [ "ICLR.cc/2025/Conference/Submission4238/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reminder of end of discussion period\", \"comment\": \"Dear Reviewer Rb4A,\\n\\nThank you for your effort in this 
review process. We sincerely remind you that the extended discussion period will end in a few days. Since we have addressed your remaining concerns above, we would be happy if you could read them and update your score or leave your additional comments. We are sure that you, the knowledgeable reviewer, will re-evaluate our paper based on them. Finally, we deeply appreciate your participation in this long discussion.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer ReJr,\\n\\nThank you for participating in the discussion. We have addressed your concern in the above and general response (Clarification on the novelty of our work). We respectfully request you to consider raising your rating score accordingly if your concerns are alleviated. Otherwise, we would be happy to hear the remaining concerns that prevent you from doing so and continue to discuss them.\\n\\nBest,\\n\\nAuthors\"}", "{\"metareview\": \"This paper tries to handle the zero-shot scenario in concept bottleneck models, which is different from the existing work using big models to eliminate a dependency on manually-annotated concept labels. The proposed method proposes to construct a large-scale concept bank, which is used for predicting final labels. Some experiments demonstrate the effectiveness of the proposed method from different perspectives. The main concerns come from the technical novelty of this paper. All the reviewers were involved in the open discussion and its discussion point is still the novelty. Based on the results of the first round of comments, rebuttal, author-reviewer discussion, and reviewer-AC discussion, this paper could NOT be accepted for ICLR, due to limited novelty.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers voted low rating scores of this paper in the first round, and, after rebuttal, only one reviewer raised his/her score to borderline acceptance while two reviewers lowered the score after discussion. 
"In the reviewer-AC discussion phase, all the reviewers agreed on the limited novelty of this paper.\"}", "{\"comment\": \"Dear Reviewer Rb4A,\\n\\nThank you for participating in the discussion. We have addressed your concern above. We respectfully request you to consider raising your rating score accordingly if your concerns are alleviated. Otherwise, we would be happy to hear the remaining concerns that prevent you from doing so and continue to discuss them.\\n\\nBest,\\n\\nAuthors\"}", "{\"summary\": \"The paper proposes a \\u201czero-shot\\u201d concept bottleneck model (CBM). The original idea in CBM is to first represent a given image in the concept space, i.e. as a weighted combination of existing concepts, and then to classify the image using this concept-weights-based image representation. The original works rely on supervised training to build image\\u2192concept and concept\\u2192class predictors. More recent works reduce labelling efforts by leveraging the prior knowledge available in vision-language models (VLMs) and LLMs, e.g. \\u201cLabel-Free Concept Bottleneck\\u201d and others.\\n\\nThe recent works on avoiding supervised concept predictor training, however, still require a training dataset to train the concept\\u2192class predictor. This paper basically aims to avoid supervision altogether. The proposed method semi-automatically builds a large concept vocabulary (over existing image caption datasets), selects the most relevant concepts specifically for a given image according to the image-text representations of a pretrained VLM (e.g. CLIP), and learns an image-specific concept-text-embeddings to image-embeddings mapping based on reconstruction loss.
"The resulting concept-embedding based approximation to the image\\u2019s representation is used to classify the image based on cosine similarity to the textual embeddings of target class labels.\\n\\n\\n# Post-discussion update\\nI would like to thank the authors and the fellow reviewers for the fruitful discussions. I\\u2019ve read the discussions (including the final responses of the authors), and here is a summary of my opinion:\\n\\n- I expressed my concerns about the possibly misleading results in Table 1, and the random matrix experiment confirms these concerns. The revised paper, however, mitigates this issue by providing a more cautious discussion and supporting it with additional results. I believe the random matrix experiment should also go into any published version of the paper.\\n- I found the CLIP-Scores misleading and the authors agreed on that. They proposed to address this concern by separating the CLIP model used within the method from the one used in evaluation. While the authors' proposal to separate the CLIP model used in evaluation reduces some bias, it is likely that CLIP models, despite low-level differences, behave in correlated ways. This limitation makes it challenging to establish CLIP-Score as a fully reliable metric.\\n- I\\u2019m \\u201chappy\\u201d about the hyper-parameter tuning response of the authors, thanks.\\n- Literature discussion is improved, but here I do share the concerns of Reviewer mgBg: the arguments remain too strong in many places, like claiming the model as free from additional training or data. The fact that somebody else trained CLIP on a gigantic dataset doesn\\u2019t alter the dependencies of the proposed approach.
The paper shares elements with prior works built on pre-trained classifiers and visual attributes more than what the current text reflects, even after the updates.\\n- Domain dependence: this is still an open concern, but perhaps not a red flag on its own.\\n\\nOverall, I remain inclined towards a \\u2018weak reject\\u2019 due to the concerns outlined above. However, in recognition of the thoughtful improvements addressing some of these concerns, I am raising my score to 6.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper\\u2019s work is an interesting addition to the research on CBMs. It addresses the missing supervision problem that seems to remain unaddressed in prior CBM work and tackles it in a relatively meaningful manner.\", \"The XAI-performance results in Table 3 look impressively good.\", \"The method is simple and easy to understand.\"], \"weaknesses\": [\"The paper\\u2019s results in Table 1 are not impressive , and this is very much expected as the learned CBM-based representation is afterall an image-specific approximation to the image\\u2019s visual representation. It is \\u201cnormal\\u201d that it performs very similar to the CLIP baseline, and I am not sure if the improved performance implies any significant achievement, as the paper lacks any substantial analysis on it (despite commenting that it might be thanks to the reduced representation gap).\", \"It seems to me that the paper\\u2019s results in Table 2 (CLIP-Score) can be misleading because it seems to be measuring the average correlation between the image\\u2019s CLIP-image representation and the obtained CBM representation. As the CBM representation of this paper is a direct reconstruction of CLIP-image representation, it seems again \\u201cnormal\\u201d (not interesting) to observe high scores. (Please correct me if I\\u2019m missing something here.)\", \"How were the hyper-parameters like lambda tuned? 
Is lambda (and other hyper-parameters, if exists) all the same across all experiments and all datasets? What methodology was used? ie. if one wants to reproduce the exact results from scratch, how should he/she tune the hyper-parameter(s) to reach the same value(s).\", \"The paper seems to be missing one relevant paper from the zero-shot learning domain: \\u201cAttributes2Classname: A discriminative model for attribute-based unsupervised zero-shot learning\\u201d. Similar to the proposed work, this paper learns to represent images in terms of a linear transformation of relevant concepts\\u2019 (predicted attributes\\u2019) textual embeddings, with and without labelled image dataset. It seems to share many motivations like reducing modality gap via representing images in terms of a combination of concept (attribute) textual embeddings and avoiding image supervision, and therefore can/should be discussed within the paper.\", \"The method heavily relies on the prior knowledge of pre-trained VLM (CLIP), and therefore, cannot be used in incompatible domains; unlike (more) supervised CBMs. In that sense, as this paper already relies on a huge training set that the VLM pre-training requires, it is not clear if any real achievement is made in terms of building human-understandable concept-based image representations with reduced supervision, from a philosophical point of view.\"], \"questions\": [\"Can you provide more detailed comparisons to prior work by directly using their concept sets? 
I am a bit lost in understanding how much the comparisons to prior work are directly one-to-one comparable and what elements make a (positive/negative) difference?\", \"How does the hyper-parameters like lambda for linear regression and lasso affect the results in terms of Table 1, 2, 3 results?\", \"In regards to the performance gains over CLIP image embedding: how well the method performs if you were to use a random matrix as $F_{C_x}$ in Eq 3 & 4, instead of true concept embeddings, for various K values?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"-\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for reading our rebuttal and providing the clarification.\\n\\n> I agree that the specific combination of methods is novel. \\n\\n> I get that the authors are positioning their work as a novel problem setting and establishing a simple and intuitive baseline. \\n\\nWe are glad to see that you recognize our paper's claim and make some concessions about the novelty of our work in the context of interpretable CBMs. \\n\\n> However, the idea of \\\"large-scale concept banks\\\" is not new. Searching for this in Google Scholar yields related papers, such as this one: https://arxiv.org/pdf/1403.7591 from 2014.\\n\\nThank you for providing related work. At a high level, we agree that this work and several existing works share the idea of using a concept bank. Nevertheless, it is worth noting that our Z-CBMs, unlike these works, enable the use of a concept bank with millions of concepts and the application of models to broad domains in a zero-shot manner. We are aware that you may know such a difference, but we would appreciate your consideration in this regard for the final evaluation. 
\\n\\nFinally, we deeply appreciate you providing positive and knowledgeable review comments and insightful discussions.\"}", "{\"comment\": \"Dear Reviewer Rb4A,\\n\\nThank you for reading our rebuttal and for providing clarification.\\n\\n## **W1**\\n\\n> we can see $F_{C_x}$ as a set of image-specific \\\"basis\\\" vectors, or an image-specific dictionary. If this dictionary is large/rich-enough in terms of the number of vectors and not being highly correlated with each other, then Eq 3 may result in an arbitrarily close approximation to the image's CLIP representation (when the concepts do or do not make sense). Therefore, I would not normally expect an improvement over the CLIP baseline. That's why I had written \\\"I am not sure if the improved performance implies any significant achievement, as the paper lacks any substantial analysis on it (despite commenting that it might be thanks to the reduced representation gap)\\\".\\n\\nSorry for our misunderstanding on your comments. In this regard, we consider that this performance improvement is a side effect that occurred in the process of achieving the objective of the Z-CBMs (i.e., building zero-shot CBMs) and is not really related to the main claim. As you explained above and we mentioned in the submitted paper, the CLIP baseline should be the upper bound in terms of performance. Even so, Figure 7 shows that the modality gap can be reduced by the reconstructed features by Eq. (3), and Table 4 shows that performance improvement occurs when the backbones have relatively weak cross-modal alignment performance. We see that these are interesting findings connected to the existing literature on the modality gap.\\n\\n> In fact, the newly added random matrix experiment (Q3) seems to support my concerns: it appears that K=2048 by default in the experiments, and the performance gap between a purely random set of vectors versus CBMs seems to diminish greatly. 
The proposed method creates an image-specific vocabulary with vectors in the CLIP embedding space, therefore, it seems \\\"not so surprising\\\" to obtain a pretty good approximation to the original image embedding. In this sense, these results do not support the claim that the provided scheme achieves the desired explainability.\\n\\nWe agree that the concept regression provides a good approximation of an image embedding if the concept candidates are sufficient. Nevertheless, it is important to note that our aim is not just to approximate image embeddings in any way but to estimate the importance of concept candidates from concept retrieval through approximating image embeddings. In fact, as shown in Table 5, Z-CBMs struggle to approximate the image embeddings when using smaller concept banks even though they use the same $K=2048$, i.e., concept retrieval does not necessarily return sufficient concept vocabulary. This indicates that the final approximation results depend on both concept retrieval and concept regression. In this sense, comparing Z-CBMs and the sparse regression with random vectors may be somewhat misleading because the uniformly sampled random vectors have more of a chance to sufficiently cover the image embeddings, and they are not interpretable.\\n\\n## **W2 (and Q1)**\\n\\n> I've noticed the new explanation added to the paper. Still, what I don't find surprising is the fact that the concept weights are selected (in Eq 3) to maximize their correlation with the CLIP feature. Therefore, again, it looks not so surprising that the top weighted ones lead to a high CLIP-Score. Please let me know if I'm missing anything.\\n\\nFrom this series of discussions, we noticed that the root cause of this concern was the use of the same CLIP for CLIP-Score evaluation as for inference. Here, we provide the partial CLIP-Score results with **CLIP-ViT-B/16**, which is not used to implement Z-CBMs in our experiments.\\n\\n**Table 2-4. 
CLIP-Score Evaluation on ImageNet**\\n|Method|CLIP-Score|\\n| :- | :- |\\n|Label-free CBM|0.7182|\\n|LaBo|0.7341|\\n|CDM|0.7629|\\n|Z-CBM|0.7848|\\n\\nThese results suggest that Z-CBMs can provide input-related concepts even in another multi-modal embedding space. We will replace all of the results with those using CLIP-ViT-B/16. Please let us know if we are missing any points.\\n\\n\\n## **W3/Q2. How was $\\\\lambda$ tuned, and how does it affect results?**\\n> Is the subset that you used a part of the training data, or validation / test split? Which performance metric did you use for hyper-parameter tuning (model selection)?\\n\\nThank you for the question, and sorry for the missing information. We searched $\\\\lambda$ on the subset of the training split of ImageNet. We selected a model with the minimum $\\\\lambda$ achieving over 10% non-zero concept ratio when using $K=2048$.\\n\\nWe sincerely appreciate your thoughtful feedback. If there is any misunderstanding, we would appreciate it if you could let us know.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Author Response for Q8YW (2/2)\", \"comment\": \"## **MW4/Q2.** What is the influence of lambda on the performance? Is it dataset-specific?\\nWe selected $1.0\\\\times10^{-3}$ as $\\\\lambda$ by searching from $\\\\{1.0\\\\times10^{-2},1.0\\\\times10^{-3},1.0\\\\times10^{-4},1.0\\\\times10^{-5},1.0\\\\times10^{-6},1.0\\\\times10^{-7},1.0\\\\times10^{-8}\\\\}$ to choose the minimum value achieving over 10\\\\% non-zero concept ratio when using $K=2048$ on a subset of the ImageNet training set. We used the same $\\\\lambda$ for all experiments. We will add this description to Sec. 4.1. We also show the effects of $\\\\lambda$ in Figure 8 in the revised paper. Using a different $\\\\lambda$ varies the sparsity in concept regression and the accuracy. Therefore, selecting an appropriate $\\\\lambda$ is important for achieving both high sparsity and high accuracy.\\n\\n## **MW5/Q3/Q4.** Is using proper negative concepts beneficial? 
What are NOT concepts in Fig. 3?\\nYes, Z-CBMs already use negative concepts by design, and they are beneficial. The NOT concepts in Fig.3 are defined as the predicted concepts with negative weights, following Oikarinen et al. (2023). In Fig.3, we printed the top 5 concepts when sorting the absolute values of weights. Therefore, \\\"NOT macro rope\\\" can mean that it is not likely \\\"macro rope,\\\" providing information on the classification boundaries of the model.\\n\\n## **MW6/Q2.** What similarity function is used in the CLIP space? Is it cosine similarity? Is $F_{C_x}W$ normalized? \\nYes, the similarity function $\\\\mathrm{Sim}(\\\\cdot)$ in Eq. (1) is cosine similarity (L153). $F_{C_x}W$ is not explicitly normalized because $f_\\\\mathrm{V}(x)$ and each concept feature of $F_{C_x}$ in Eq. (3) is normalized, i.e., concept regression finds $W$ to approximate the normalized features in Eq. (3). We will add these descriptions.\\n\\n## **MW7/Q3.** Is using (cosine) similarity between images and concepts helpful for input-to-concept prediction?\\nYes, Z-CBMs actually use the cosine similarity in concept retrieval in Eq. (1). Similarly, we also tried to use the cosine similarity for the concept-to-label prediction (concept regression) in Table 6 (\\\"CLIP Similarity\\\"). However, it failed to achieve practical performance, indicating that the signals from the cosine similarity are not sufficient to represent the concepts' importance. This is the reason why we introduce concept regression into the concept-to-label prediction to accurately estimate the importance.\\n\\n## **SW1.** Do LLMs predict the final labels from top-K retrieved concepts?\\nTechnically, yes. However, using LLMs in the prediction completely loses interpretability because LLMs are inherently black-box models constructed from transformers. Since our goal is to build an interpretable model, in such cases, we would require another, non-trivial interpretation method to interpret the LLMs' outputs. 
In this sense, our Z-CBMs can straightforwardly provide the interpretability represented by concept weights from the sparse regression.\\n\\n## **SW2.** On the effectiveness of uni-modal search and cross-modal fusion\\nThank you for the suggestion. The direction of using uni-modal search and cross-modal fusion like [ReCo 2024] is interesting. We implemented this approach by using the union caption dataset of CC3M, CC12M, and YFCC15M. Table 5-1 shows the result. The uni-modal search & cross-modal fusion significantly degraded the performance. This is due to the mismatch between retrieved captions and class label texts. Further, this approach loses direct interpretations of input image features by swapping them to retrieved images' captions.\\n\\n## In table 1: the bold facing of performance should include the zero-shot/linear-probe CLIP.\\nWe will fix the presentation by following your advice.\\n\\n## It is unclear why the zero-shot CLIP model should be considered as the upper bound.\\nThis is because the objective of concept regression defined by Eq. (3) is to approximate the input image features by the concept candidate features. We will revise L283 as \\\"it is expected to perform as the upper bound of the Z-CBMs' performance because Eq. (3) aims to approximate the input image features by concept candidates.\"}", "{\"comment\": \"Thanks for the provided responses. I believe this paper demonstrates innovation and value, but it also has certain shortcomings, as mentioned by myself and other reviewers. For me, the main concern lies in the insufficient interpretability of some concepts used by the authors. Therefore, I will maintain the current score.\"}", "{\"title\": \"Author Response for mgBg\", \"comment\": \"Thank you for your positive and valuable feedback. We address your concerns bellow. We will revise our paper according to your suggestions.\\n\\n## **W1/Q1.** The inference cost is significantly increased. 
/ Comparison to existing CBMs on inference speed.\\nThank you for this comment. We respectfully note that the direct comparison of inference speed between Z-CBMs and existing CBMs is unfair because the problem setting is different, i.e., existing CBMs require training. Even so, in order to identify the limitations of Z-CBMs and the level to which future research should aim, we compared the inference speed of Z-CBMs with that of Label-free CBMs as follows.\\n\\n**Table 6-1. Evaluation on ImageNet**\\n| Method | Top-1 Accuracy | Inference Time (ms) |\\n| :-| :-| :-|\\n| Label-free CBM | 58.00| 3.30 |\\n| Z-CBM (K=128) | 54.91| 7.87 |\\n| Z-CBM (K=256) | 57.83| 11.64 |\\n| Z-CBM (K=512) | 60.86| 17.31 |\\n| Z-CBM (K=1024) | 61.92| 33.75 |\\n| Z-CBM (K=2048) | 62.70| 55.63 |\\n\\nWe can see that Z-CBMs are at least twice as expensive to infer as Label-free CBMs. Since the concepts contained in images are not known in the zero-shot problem setting, a relatively large value of $K$ needs to be selected to accommodate any concept and achieve high accuracy. In this regard, future work could include approaches such as dynamically creating a cache of concepts according to the task. We will add this discussion to the paper.\\n\\n## **W2.** On the relevance to \\u201cVisual Classification via Description from Large Language Models\\u201d\\nThank you for sharing related work. The mentioned paper proposed a zero-shot classification method using LLM-generated text descriptions for each target class. In contrast to Z-CBMs, which directly decompose input features by concepts, this method makes a prediction based on the correlation between the input features and the task-specialized texts. 
In this regard, this method can also provide interpretability in a different way from Z-CBMs, but it requires (i) generating the task-specialized texts with an LLM, (ii) restricting the inference algorithm to the CLIP-style zero-shot classification, and (iii) calculating the contribution scores for each text independently. On the other hand, Z-CBMs can be used for arbitrary domains without external LLMs and with arbitrary inference algorithms (e.g., training heads). Further, Z-CBMs can provide relative contribution scores among concepts by sparse regression, i.e., we can compare different concepts quantitatively. We will add the discussion to Sec. 5.\\n\\n## **Q2.** I\\u2019m wondering whether the visual and textual features are in the same space as shown in Fig 2 (a)\\nThank you for this question. We show the PCA feature visualization in Fig.7 of the revised paper; we use PCA instead of t-SNE because PCA provides more realistic intuition in the feature space while computing only linear transformations. We see that, in reality, the clusters of visual and text features are separated. This phenomenon, known as the modality gap, is one of the challenges of CLIP representations [b]. Although mitigating the modality gap is out of the scope of our paper, Z-CBMs can alleviate the modality gap by interpreting input images with textual concepts, as seen in Fig. 7. We consider this to be the reason why Z-CBMs improve the zero-shot CLIP in Table 1.\\n\\n[b] Liang, Victor Weixin, et al. \\\"Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning.\\\" NeurIPS 2023.\\n\\n## **Q3.** Concepts such as \\u201cNot maltese dog terrier\\u201d cannot provide interpretable information for identifying categories\\nThank you for this comment. Strictly speaking, we consider that the appropriate granularity of concepts depends on use cases and is undecidable in general. 
For example, if the user wants a fine-grained classification of dogs, \\\"Not maltese dog terrier\\\" might be useful information for assessing the decision boundary. If one wants to avoid concepts similar to target class names, one can control the appearance by modifying the threshold in the concept filtering (L225). Similarly, we can avoid specific sets of concepts by directly removing them from the concept bank. However, automatically controlling concepts and their granularity in general cases is an open question and should be resolved in future work.\"}", "{\"comment\": \"Dear Reviewer xHwx,\\n\\nThank you for participating in the discussion. We have addressed your concern in the general response (Clarification on the novelty of our work). We respectfully request you to consider raising your rating score accordingly if your concerns are alleviated. Otherwise, we would be happy to hear the remaining concerns that prevent you from doing so and continue to discuss them.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer FScg,\\n\\nThank you again for your positive and constructive review comments. We respectfully remind you that the discussion period will end in a few days. We have responded above to your concerns about the technical innovation of our paper. We would appreciate it if you would take the time to read and comment on the responses.\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Reminder of end of discussion period\", \"comment\": \"Dear Reviewer ReJr,\\n\\nThank you for your effort in this review process. We sincerely remind you that **the extended discussion period will end in a few days**. Since we have addressed your remaining concerns above, we would be happy if you could read them and update your score or leave your additional comments. We are sure that you, the knowledgeable reviewer, will re-evaluate our paper based on them. 
Finally, we deeply appreciate your participation in this long discussion period.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed response.\\n\\nWhile the response addressed some of my concerns, several issues remain.\\n\\nFirst, your explanation that performance improvements over the original CLIP are due to reducing the modality gap is unconvincing. As I mentioned earlier, the modality gap still exists in the image-to-concept matching stage. It\\u2019s unclear how introducing an intermediate \\u201cconcept\\u201d stage in Z-CBM, rather than direct image-to-label matching, effectively reduces this gap. Moreover, if reducing the modality gap were truly the key factor behind the performance improvement, Z-CBM should consistently outperform direct image-to-label matching across various backbones, but that is not the case.\\n\\nSecond, I have also read the comments from other reviewers and share their concerns regarding the limited novelty. \\n\\nBased on these reasons, I retain my original score.\"}", "{\"comment\": \"Dear Reviewer ReJr,\\n\\nThank you again for your insightful review comments. We respectfully remind you that **the discussion period will end in a few days**. We have responded above to your concerns and believe our responses address them, especially regarding the modality gap reduction. We would appreciate it if you would take the time to read and comment on the responses.\\n\\nBest,\\nAuthors\"}", "{\"summary\": \"This paper addresses the zero-shot scenario for concept bottleneck models (CBMs). Previous methods successfully eliminate the dependency on manually annotated concept labels via large language models (LLMs) and vision-language models (VLMs). However, they still require training the models on the target dataset and are not applicable to zero-shot scenarios. 
Zero-shot CBMs (Z-CBMs) construct a large-scale concept bank using caption datasets and a noun parser, retrieve concept candidates based on input images, and predict final labels using the retrieved concepts. The experiments demonstrate the effectiveness of the proposed method on target task performance and interpretability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"The main idea is straightforward and intuitive.\", \"Target task performance is competitive. Table 1 shows that the proposed method even outperforms the performance of the original CLIP, and Table 2 shows that a simple trainable variant of the proposed method outperforms the previous method in the same setting.\"], \"weaknesses\": [\"The reason for the performance improvement compared to the original CLIP is unclear. The paper argues that it is due to a reduced modality gap in the concept-to-label mapping. However, this claim is not fair since the modality gap still exists in the input-to-concept mapping. Furthermore, since CLIP is trained on image-to-text matching, the claim that performance improves due to a reduced modality gap in text-to-text matching also requires sufficient references.\", \"I'm not entirely clear on the advantages of this approach over the most basic interpretable approach based on CLIP. Specifically, one could retain the standard CLIP classification process and simply retrieve concepts from a concept bank using visual features for interpretability. While it is hard for this baseline to address concept intervention, it doesn't seem to offer significant differences in terms of interpretability.\", \"The performance difference between linear regression and lasso in Table 6 is unclear. 
Linear regression should estimate the original visual features ($f_V(x)$) more accurately, so why does linear regression perform so poorly here?\"], \"questions\": [\"Why was linear regression used instead of lasso in L426-427?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for Q8YW (1/2)\", \"comment\": \"Thank you for your knowledgeable review and many interesting suggestions. We address your concerns below. We are sorry for the multiple responses but hope that these responses help you to re-evaluate our work.\\n\\n**Table 5-1. Additional evaluation on ImageNet**\\n| Method | Top-1 Accuracy |\\n| :- | :- |\\n| Zero-shot CLIP | 61.88|\\n| Z-CBM (Ours) | **62.70**|\\n| ConSe | 25.19|\\n| class-to-concept | 61.86|\\n| Z-CBM w/ positive weight | 39.77 |\\n| uni-modal search & cross-modal fusion | 10.11|\\n\\n## **MW1/Q1.** Novelty is limited. How is this work different from [ConSe 2014]?\\nThank you for providing related work. Although they are important and should be discussed, our work is totally different from theirs and has remarkable novelty. Here, we focus on the difference from [ConSe 2014], which is your primary concern. \\nFirst of all, **the zero-shot classification with [ConSe 2014] is completely different from that with Z-CBMs**. ConSe infers a target label from a semantic embedding composed of a weighted sum of concepts of the single predicted ImageNet label, whereas Z-CBMs infer the label by the concept features retrieved from the concept bank and weighted by sparse regression. 
ConSe has three potential risks: (i) since the zero-shot inference depends on the ImageNet label space, it cannot accurately predict target labels if there are no target-related labels in ImageNet; (ii) the prediction accuracy largely depends on the model's ImageNet accuracy; (iii) the cosine-similarity-based prediction can produce concepts that semantically overlap with each other, which restricts both accuracy and interpretability.\nIn contrast, our Z-CBMs directly decompose an input image feature into concepts via the concept bank, so they are not restricted to any external fixed label space. Our concept regression can also provide semantically independent concepts via the sparse regression algorithm.\nFurthermore, we compared the performance between ConSe and Z-CBMs; we implemented ConSe with the pre-trained CLIP and concept bank in the same setting as Z-CBMs. The results are shown in Table 5-1 and Table 1 of the revised paper. **Our Z-CBMs largely outperformed ConSe, indicating that the methodology of Z-CBMs is superior to and different from ConSe**. We will add ConSe as a zero-shot baseline and discuss it in the main paper.\n\nThe other works partially share some technical components with our work (e.g., sparse regression), but they differ greatly from our Z-CBMs: [Costa 2014] also depends on an existing classifier in the same fashion as ConSe, [Write 2013] requires additional training for seen classes, and [Objects2Actions 2015] limits the domain to action recognition. 
More importantly, none of them focus on the interpretability of the models' decisions.\nIn contrast, our work is the first to build interpretable zero-shot concept bottleneck models from the combination of concept retrieval and concept regression using a large-scale concept bank, without any additional training or domain restrictions; this technical novelty was also acknowledged by Reviewers xHwx and mgBg.\n\n## **MW2/Q3.** Weighting concepts based on target classes\nThank you for this suggestion. We did not consider this direction because it does not achieve the goal of concept bottleneck models, i.e., predicting concepts from the input and then predicting final labels from the concepts. This goal is essential to provide interpretability of the output. Even so, we evaluated this variant, as shown in Table 5-1 (the \\\"class-to-concept\\\" row). The \\\"class-to-concept\\\" variant slightly decreased the zero-shot baseline performance. This may be because retrieving and weighting concepts from target class texts is not helpful in reducing the modality gap between image and text, in contrast to Z-CBMs. Therefore, this direction does not seem promising in terms of either interpretability or performance.\n\n## **MW3/Q3.** Positive weight constraint on the linear regression would be beneficial\nSince $F_{C_x}$ is defined in the real number space, the retrieved concept vectors also contain negative values. Thus, linear regression requires negative values in $W$ to reconstruct image features $f_\\mathrm{V}(x)$, which are also defined in the real number space. In fact, when constraining the weights to be positive, the performance was significantly degraded (\\\"Z-CBM w/ positive weight\\\" in Table 5-1). 
Further, the negative concepts are helpful for a more detailed understanding of the decision boundary.\"}", "{\"summary\": \"This paper utilizes a large-scale concept bank and a dynamic concept retrieval method to make high-accuracy predictions without requiring additional training on a target dataset. By employing sparse regression to identify and weigh the importance of relevant concepts from the bank, Z-CBMs achieve effective concept-to-label mappings while ensuring model interpretability, addressing the limitations of previous concept bottleneck models that relied on extensive manual data collection and training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. To the best of my knowledge, this paper is the first to propose a zero-shot Concept Bottleneck Model (CBM), marking a significant contribution to the field of CBMs. Furthermore, the proposed zero-shot CBM method exhibits predictive capabilities comparable to those of CLIP, while its architecture enhances the model's interpretability.\\n\\n2. The experiments presented in this paper are comprehensive and well-executed, encompassing 12 datasets. Despite the absence of suitable benchmarks, the authors have effectively compared their method with zero-shot CLIP and other training head approaches.\\n\\n3. This paper introduces the concept of a \\\"concept bank\\\" and employs an efficient concept retrieval method for label prediction based on this foundation. The concept bank is constructed through the analysis of extensive datasets. In Section 4.6.2 and Table 1, the authors provide a detailed comparison of zero-shot performance across different sizes of concept banks, demonstrating that expanding the concept bank enhances the expressive capacity of the CBM, thereby improving its zero-shot performance.\", \"weaknesses\": \"1. 
While this article provides a valuable comparison of various methods related to the concept bank, it appears that the testing results for a specific approach\u2014constructing a concept bank using a question-and-answer method similar to the label-free CBM [1]\u2014are not included. Including this method, particularly in the context of designing a smaller, domain-specific concept bank, could enhance the comprehensiveness of the analysis. I encourage the authors to include a comparison with a concept bank generated using the question-and-answer approach from the label-free CBM, as this would provide a deeper understanding of the different concept bank construction approaches.\n\n2. The paper mentions that the regularization term in sparse regression can help reduce conceptual redundancy; however, it lacks specific visual results to illustrate this effect. Additionally, the advantages of using sparse regression in comparison to other distance metrics in feature space for weight determination are not clearly established. To strengthen the paper, I suggest that the authors provide visual examples comparing the concepts selected by sparse regression versus other methods, demonstrating how redundancy is reduced. Furthermore, including a quantitative comparison of sparse regression against other weighting methods would enhance the clarity and persuasiveness of the proposed method.\", \"reference\": \"[1]. Oikarinen, Tuomas, et al. \\\"Label-free concept bottleneck models.\\\" arXiv preprint arXiv:2304.06129 (2023).\", \"questions\": \"1. This article compares various methods related to the concept bank. However, I may have overlooked the testing results for a specific approach: constructing a concept bank using a question-and-answer method similar to the label-free CBM [1]. This involves designing a smaller concept bank tailored to the problem domain.\n\n2. In this paper, it is mentioned that the regularization term in sparse regression can help reduce conceptual redundancy. 
Could you please provide some specific visual results to illustrate this effect? Additionally, I\\u2019m curious about the advantages of sparse regression compared to using distance or other metrics in feature space to determine weights. If there are any experimental results that demonstrate this comparison, it would certainly enhance the persuasiveness of the method presented in your paper. \\n\\n3. I noticed the inference time presented in Figure 6. Could the authors clarify whether this represents the total time for the entire zero-shot inference process? As the scale of the concept bank expands, it is important to understand how embedding and concept retrieval times may increase. I would appreciate it if the authors could provide a breakdown of the reported times, detailing the components of the inference process (e.g., embedding, concept retrieval, regression) and how these times are affected as the concept bank size increases.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for ReJr\", \"comment\": \"We appreciate your constructive and insightful feedback. We address your concerns below. We will revise our paper according to your suggestions.\\n\\n## **W1.** Why does the proposed method improve the original CLIP? Is the modality gap really reduced?\\nThank you for this valuable comment. Existing research [a] showed that converting the modality improves performance by reducing the modality gap. To further evaluate the modality gap, we compare the distances (modality gap) among images, weighted sums of concepts by Eq.(3), and ground-truth class prompt texts on the feature spaces with ImageNet samples by following [b]. 
The L2 distances were $1.74\\times 10^{-3}$ in image-to-label and $0.86\\times 10^{-3}$ in concept-to-label, demonstrating that there is indeed a modality gap, and that Z-CBMs largely reduce it by representing images with textual concepts. Fig. 7 in the revised paper also shows the PCA feature visualizations, where the weighted concepts are located between image and text feature clusters, emphasizing the reduction of the modality gap. We will add this discussion with the reference and additional experiments to Sec. B.1.\n\n[a] Qian, Qi et al. \\\"Intra-modal proxy learning for zero-shot visual categorization with clip.\\\" NeurIPS 2023.\n\n[b] Liang, Victor Weixin, et al. \\\"Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning.\\\" NeurIPS 2022.\n\n## **W2.** On the advantages of Z-CBMs compared to a simple interpretation of embedding.\nOne clear advantage of Z-CBM is that it ensures interpretability by guaranteeing that the final prediction is made from interpretable concepts. That is, **we cannot ensure interpretability by just retrieving concepts because this does not guarantee that the concepts really contribute to the prediction**. To ensure interpretability, our Z-CBMs estimate the importance of the concepts by concept regression and predict the final labels by actually using the weighted concept features. As you commented, intervention is also an advantage of Z-CBMs since it provides meaningful interpretations by changing parts of the concepts. Further, the performance improvements by reducing the modality gap can be an advantage, as shown above.\n\n## **W3.** On the performance difference between linear regression and lasso\n**This is due to the unstable numerical computation of linear regression**. 
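A toy numerical check of this instability (dimensions shrunk from the $d=512$, $K=2048$ setting; L2 regularization shown as a simple stand-in for the stabilizing effect of lasso's penalty):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 64, 256                      # shrunk stand-ins for d=512, K=2048 (K > d)

F = rng.standard_normal((K, d))     # K retrieved concept vectors, d dimensions
G = F @ F.T                         # K x K Gram matrix of plain linear regression

# rank(G) <= d < K, so G is singular and the closed-form solution is ill-posed;
# adding a regularizer restores a well-posed (full-rank) system
rank_plain = np.linalg.matrix_rank(G)
rank_reg = np.linalg.matrix_rank(G + 1e-3 * np.eye(K))
print(rank_plain, rank_reg)   # rank_plain == d, rank_reg == K
```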
If the feature dimension $d$ is smaller than the concept retrieval size $K$, the Gram matrix of $F_{C_x}$ in linear regression will be rank-deficient, i.e., there is no inverse matrix for the closed-form solution, and lasso can avoid this by sparse regularization. In our setting, we used $d=512$ and $K=2048$ as the default, so the unstable computation prevents the optimization from converging. Even if this is not the case, the Gram matrix might be rank-deficient since concept retrieval can generate concepts that correlate with each other. We will add this explanation to Sec. 4.6.4.\n\n## **Q1.** Why was linear regression used instead of lasso in the intervention experiments?\n**This is because the intervened concepts are already sparse**. In Sec. 4.5, we intervened on the concepts with non-zero coefficients after sparse regression. Since the number of non-zero concepts is smaller than the feature dimension, we do not need to consider the unstable computation problem discussed above. If lasso is used here, it is inappropriate as an intervention experiment because the ground-truth concept does not influence the output when sparse regularization eliminates it.\"}", "{\"title\": \"Clarification on the novelty of our work\", \"comment\": \"Dear Reviewers,\n\nThank you for reading our rebuttal and participating in the discussion. Since several reviewers question the novelty of our work, let us clarify the position and novelty here.\n\n- We claim novelty in (i) proposing a new zero-shot problem setting of concept bottleneck models (CBMs) where we do not train models with additional training datasets and (ii) building simple yet practical zero-shot concept bottleneck models (Z-CBMs) by combining concept retrieval with large-scale concept banks and sparse concept regression to estimate the importance of the retrieved concepts.\n- We do not claim the novelty of each technical component of Z-CBMs, but this does not undermine our novelty in (ii). 
We acknowledge that several existing studies have already used concept banks to retrieve related concepts and sparse regression to estimate the contributions of input variables. However, to our knowledge, Z-CBMs achieve the first successful results in building CBMs in a zero-shot manner. We used these well-studied technical components in Z-CBMs to focus on building a simple baseline for this new problem setting.\n- While the individual technical components are not unique, we show the validity of the Z-CBM design through the ablation studies of concept banks (Table 5) and regression algorithms (Table 6). Additionally, inspired by the comments from Reviewer Q8YW, we also show the comparison results using the existing zero-shot classification baseline (ConSe) in Table 1. These suggest that the combination of concept retrieval and concept regression for building CBMs is unique, and the design is reasonable for achieving practical performance.\n- We also agree with Reviewer Q8YW's opinion that we should discuss the similarity between Z-CBMs and existing works from broader perspectives. Thus, we will add more detailed and careful discussions with respect to zero-shot classification to the main paper by extending Section C.\n\nWe hope these clarifications eliminate your concerns about the novelty of our work.\n\nFinally, again, our primary contribution is opening up a new research field of zero-shot CBMs, not the novelty of individual technical components.\n\nBest,\n\nAuthors\"}", "{\"comment\": \"Thank you for the response. I strongly concur with Reviewer Q8YW that the novelty in this paper is limited, and the response here does not alleviate this concern from my point of view.\n\n> Specifically, the design of Z-CBMs that search input-related concept candidates from large-scale concept banks and then predict the importance of each concept by sparse regression is not obvious and has significant novelty. 
Also, since this paper is the first work on zero-shot CBMs, it is important to solve the problem as simply as possible as a baseline for future work.\\n\\nI agree that the specific combination of methods is novel. However, the idea of \\\"large-scale concept banks\\\" is not new. Searching for this in Google Scholar yields related papers, such as this one: https://arxiv.org/pdf/1403.7591 from 2014. \\\"predicting the importance of each concept by sparse regression\\\" is certainly not new. LIME uses this. \\n\\nI get that the authors are positioning their work as a novel problem setting and establishing a simple and intuitive baseline. Therefore, I maintain my original positive score.\"}", "{\"summary\": \"The authors propose a variant of concept bottleneck models (CBM) which uses sparse linear regression on a databank of concepts to approximate the visual feature of each image. The resulting CBM can achieve reasonable zero-shot accuracy and CLIP-score without additional training. The proposed framework does not require additional data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The method is simple.\\n\\n(2) The results are good. The authors show that their ZS-CBM achieves SoTA accuracy among prior CBMs. They also demonstrate the quality (relevance) of the selected concepts using CLIP-score results.\", \"weaknesses\": \"I can't find anything wrong with this paper except perhaps the lack of technical innovation. There is abundant literature on concept bottleneck models. Sparse regression on concept features is very widely used. Using retrieval to find relevant concepts is not technically interesting. 
In my opinion, this work does not add much value to the existing CBM literature.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper zero-shot concept bottleneck models are discussed as a means of obtaining explainable (in terms of concepts) zero-shot classifiers (as in classes without explicit training data). The idea is to use a concept bottleneck model to translate an image into a set of visual concepts, and then use these concepts to classify the image into one of the target classes of the benchmark dataset. Since the train/test data of the benchmark data is not explicitly used, this is a form of zero-shot classification. In the proposed method, the concept bank consists of about 5M concepts obtained from image captioning datasets (including e.g. Flickr-30K and the YFCC-15M dataset). The image and all the concepts are encoded in the CLIP embedding space, and then the top K most similar ones are used. From this set of concepts a sparsely weighted CLIP feature vector is constructed, which is then used to find the nearest target class y. This model is evaluated on 12 classification tasks and performs similarly to zero-shot CLIP.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The ideas presented in this paper make a lot of sense, and the manuscript is clearly written. The relevance becomes clear from the amount of related research in this direction of \\u2018attribute-based\\u2019 or \\u2018concept-based\\u2019 zero-shot classification, which dates back at least to the 2010s. However, this is also the largest weak point of the paper: the novelty compared to papers and ideas presented back then is not clearly stated, nor is the paper compared to any other zero-shot method besides retrieving in the CLIP space. 
Of course some techniques / methods did not exist back then (e.g., the CLIP embedding space), but that does not make this paper substantially different.\", \"weaknesses\": [\"## Major weakness\", \"The major weakness of this submission is the novelty with respect to previous work (likely of a previous generation, before deep learning took off). The idea of zero-shot classification in a visual-semantical space based on a joint embedding is not novel. A good example is [**ConSe 2014**], where ImageNet classifiers are used together with a Word2Vec space to compose a Word2Vec embedding for an image (based on the classifier outputs and the word embeddings of the class names), which is then used for zero-shot classification in text space. This is extremely similar to the proposed idea, except that now a CLIP space is used. Also the idea of using a (sparse) regression of the concepts has been explored before [**Write 2013, Costa 2014, Objects2Actions 2015**]. None of these papers uses an explicit attribute/concept-to-class mapping as the seminal work of Lampert et al. [**AwA 2013**] does; they all used a discovered attribute-to-class mapping based on an embedding space [**ConSe 2014, Objects2Actions 2015**] or based on co-occurrence statistics or web search [**Costa 2014**], including co-occurrences from the YFCC dataset, also used in this work.\", \"The *only* difference I see with respect to these works is that the concept bank used in this paper is much larger and that a CLIP embedding is used. 
Based on the previous works, the following questions are interesting, but not explored in this submission:\", \"The weighting of concepts is now based on the input image; it could also be done based on the target classes (i.e., each class selects the top-K concepts which are most similar, or finds the most co-occurring concepts in the captioning datasets)\", \"The weights of a concept in the linear regression model can be negative; this is unlikely to be beneficial given that the used concepts are the top-K most relevant for this particular image. Would it make sense to restrict W to be positive?\", \"What is the influence of lambda on the performance? And on the sparsity? Is the optimal lambda dataset specific? It seems that the current value ($1\\times10^{-5}$) is extremely small, compared to the size of W (which has K weights, with K ~1000).\", \"Using proper negative concepts for a class is likely to be beneficial, given that knowing what is not related to the target class is a strong signal; could that be explored as well?\", \"What similarity function is used in the CLIP space? Is it cosine similarity? Is $F_{C_x} W$ normalized?\", \"The similarity between a concept and the image is now an indicator function only (concept in top-K concepts for this image), while the similarity value might contain a strong signal of relevance. It could make sense to use the similarity value between the image and the concepts also in constructing the concept CLIP embedding of the image.\", \"## Secondary weaknesses / suggestions\", \"1. The second step, the final label prediction (Eq 4), is a purely textual reasoning problem. In the light of the enormous reasoning power of the LLMs, it could be explored if LLMs would be able to reason about the final class provided the top-K concepts from the previous stage.\", \"2. A suggestion for an additional exploration. In this submission, the CLIP space is searched in a cross-modal setting, from an input image to a target/output text. 
In [**ReCo 2024**], however, it has been shown that uni-modal search (image-to-image) works much better, followed by cross-modal fusion (using the textual description of that image). This could be exploited (e.g.) by using (image, caption) pairs from the image datasets. It would be interesting to study if different search strategies improve the zero-shot classification performance.\", \"### Minor/Nitpicks\", \"In Table 1: the boldfacing of performance should include the zero-shot/linear-probe CLIP.\", \"It is unclear why the zero-shot CLIP model should be considered as the upper bound of the proposed method. The proposed method uses the (implicit) knowledge of millions of additional (image, text) pairs.\", \"# Post Rebuttal Evaluation:\", \"It seems that we reached consensus that this paper brings hardly any technical novelty, but uses existing ideas from zero-shot classification for the zero-shot concept bottleneck model idea. In that light, I believe that the paper should be rewritten completely, putting a fair treatment of related work from the zero-shot classification community, discussing the similarities and the differences, explaining why known methods from those days can't be used, or even better just evaluate at least a handful of these ideas within this new domain. I still don't see the major conceptual difference between describing an action based on ImageNet classes (as, e.g., in [*Objects2Actions 2015*]) and describing a class with concepts from a concept bank. 
Such a rewrite is imho beyond a conference submission revision, hence I look forward to seeing a revised version of this paper in the future.\", \"### References\", \"[**AwA 2013**]: Attribute-based classification for zero-shot visual object categorization, TPAMI 2013.\", \"[**ConSe 2014**]: Zero-Shot Learning by Convex Combination of Semantic Embeddings, ICLR 2014.\", \"[**Costa 2014**]: COSTA: Co-Occurrence Statistics for Zero-Shot Classification, CVPR 2014.\", \"[**Objects2Actions 2015**]: Objects2action: Classifying and localizing actions without any video example, ICCV 2015.\", \"[**Write 2013**]: Write a Classifier: Zero-Shot Learning Using Purely Textual Descriptions, ICCV 2013.\", \"[**ReCo 2024**]: Retrieval Enhanced Contrastive Vision-Text Models, ICLR 2024.\"], \"questions\": \"1. The main question is how this work is different from [**ConSe 2014**] (and other similar works) beyond the fact that they use an ImageNet classifier to transform images to text, whereas here a CLIP space is used. So, please clarify what novel contributions the method makes beyond using a CLIP embedding space instead of Word2Vec + ImageNet classes?\n\n2. Please clarify: (a) the used CLIP similarity function, (b) whether $F_{C_x} W$ is normalized, (c) the influence of lambda.\n\n3. Please discuss the open directions (taken from previous research): weighting based on target classes, restricting W to positive weights only, using negative concepts in a proper manner, using the similarity value.\n\n4. From Figure 3 it becomes clear that some concepts are negated, for example `NOT macro rope` (bottom row, right). How is this defined? Is the `not` a part of the concept, and hence encoded in the `f_T(concept)` vector, or is the `not` a result of the linear regression, for these concepts with a negative weight in W? 
Please elaborate whether it is conceptually desired that concepts in the top K most related concepts for an image could be negatively weighted for the image-text embedding.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\n\nThank you for your detailed response. Your experiments have eliminated my concerns.\n\nI have noted the concerns previously raised by Reviewers Q8YW and FScg. I also doubt the novelty of this paper.\n\nTherefore, I decided to retain my previous score.\n\n\nBest,\\\nReviewer\"}", "{\"comment\": \"Dear Reviewer Rb4A,\n\nThank you again for your detailed and constructive review comments. We respectfully remind you that **the discussion period will end in a few days**. We have responded above to your concerns. We believe these address your concerns. We would appreciate it if you would take the time to read and comment on the responses.\n\nBest,\nAuthors\"}", "{\"comment\": \"Dear Reviewer xHwx,\n\nThank you for reading our rebuttal. We are glad that our responses have addressed your concerns.\n\n> I have noted the concerns previously raised by Reviewers Q8YW and FScg. I also doubt the novelty of this paper.\n\nFor the novelty of our work, we have explained the details based on scientific facts in the responses, including additional evaluations and a discussion of related work in the revised paper. Thus, we would be happy if you could decide the final score by reading the discussion fairly.\n\nFinally, we appreciate you providing your detailed and helpful review comments and taking the time to read our rebuttal.\n\nBest,\n\nAuthors\"}", "{\"comment\": \"Dear Reviewer xHwx,\n\nThank you again for your constructive and helpful review comments. We respectfully remind you that **the discussion period will end in a few days**. We have responded above to your concerns. We believe these address your concerns. 
We would appreciate it if you would take the time to read and comment on the responses.\n\nBest,\nAuthors\"}", "{\"comment\": \"Dear Authors,\n\nThanks for your careful responses. \n\n# W1 \nI think there is a bit of misunderstanding regarding my comment. As opposed to the summary given as part of the author response, I do not say that \\\"Accuracy is not impressive\\\" (which would imply an absolute sense of accuracy), and \\\"the paper lacks analysis of the modality gap\\\" is not really my point. What I mean is the following: we can see $F_{C_x}$ as a set of *image-specific* \\\"basis\\\" vectors, or an image-specific dictionary. If this dictionary is large/rich enough in terms of the number of vectors and these are not highly correlated with each other, then Eq 3 may result in an arbitrarily close approximation to the image's CLIP representation (whether or not the concepts make sense). Therefore, I would not normally expect an improvement over the CLIP baseline. That's why I had written \\\"I am not sure if the improved performance implies any significant achievement, as the paper lacks any substantial analysis on it (despite commenting that it might be thanks to the reduced representation gap)\\\". \n\nIn fact, the newly added random matrix experiment (Q3) seems to support my concerns: it appears that K=2048 by default in the experiments, and the performance gap between a purely random set of vectors versus CBMs seems to diminish greatly. The proposed method creates an image-specific vocabulary with vectors in the CLIP embedding space; therefore, it seems \\\"not so surprising\\\" to obtain a pretty good approximation to the original image embedding. In this sense, these results do not support the claim that the provided scheme achieves the desired explainability.\n\n\n# W2 (and Q1)\n\nI've noticed the new explanation added to the paper. 
Still, what I don't find surprising is the fact that the concept weights are selected (in Eq 3) to maximize their correlation with the CLIP feature. Therefore, again, it looks not so surprising that the top weighted ones lead to a high CLIP-Score. Please let me know if I'm missing anything.\n\n# W3/Q2. How was $\\lambda$ tuned, and how does it affect results?\n\nIs the subset that you used a part of the training data, or validation / test split? Which performance metric did you use for hyper-parameter tuning (model selection)?\"}", "{\"comment\": \"Thank you for reading our rebuttal and providing additional comments. We are delighted that you have found innovation and value in our research. As for the interpretability given by Z-CBM, we acknowledge that the control of the granularity of concepts is not perfect, especially as mentioned in the rebuttal, but we believe that this issue should be advanced in future research as another research question. Nevertheless, we would be honored to discuss this perspective with you. We thank you for your professional review comments, constructive discussions, and timely and knowledgeable replies.\"}", "{\"summary\": \"This paper proposed a novel approach for achieving zero-shot image classification based on explainable Concept Bottlenecks. Compared with existing Concept Bottleneck Models, the authors' proposed approach gets rid of the requirement of labelled training data for learning the mapping network from concepts to categories by fitting the image representation with concept features. 
The experimental results verified the effectiveness of this approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper provides a novel interpretable zero-shot image classification method.\n\nCompared with existing Concept Bottleneck Models, the proposed method eliminates the requirement of labeled training data.\n\nThis paper provides a tool for researchers to understand the semantics of CLIP-extracted visual features.\", \"weaknesses\": \"The inference cost is significantly increased due to the extremely large concept bank and the test-time learning process.\n\nThis paper lacks discussion of other training-free concept bottleneck approaches, e.g., \\u201cVisual Classification via Description from Large Language Models\\u201d.\", \"questions\": \"The authors may consider comparing the inference speed of the proposed approach and existing CBMs.\n\nI\\u2019m wondering whether the visual and textual features are in the same space as shown in Fig 2 (a) for fitting image features with textual features of candidate concepts, considering that they come from two modalities and that, in the pre-training stage, the text and visual features are aligned by a cross-entropy loss rather than strictly calibrated by an L2 loss. The authors may consider showing a t-SNE figure to clarify this.\n\nThe authors may consider evaluating the interpretability of the candidate concepts. In my opinion, concepts such as \\u201cNot maltese dog terrier\\u201d cannot provide interpretable information for identifying categories.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response for xHwx\", \"comment\": \"We appreciate your constructive and helpful comments and suggestions. We address your concerns below. 
We will revise our paper according to your suggestions.\\n\\n## **W1/Q1.** Additional experiments using concept bank with question-and-answer (Q&A) approach\\nThank you for this valuable suggestion. We have shown in the GPT-3 (ImageNet Class) row of Table 5 an experiment using a concept bank (4K concepts) with a Q&A approach of Label-free CBM. To understand the importance of the concept bank more deeply, we now compare it with a baseline utilizing the same concept bank. The results are as follows.\\n\\n**Table. 4-1 Evaluation on ImageNet with GPT-3 (ImageNet Class) Concepts**\\n|| Top-1 Acc. | CLIP-Score |\\n| :- | :- | :- |\\n| Label-free CBM|58.00|0.7056|\\n| CDM |62.52|0.7445|\\n| Z-CBM (Zero-shot)|59.18|0.6276|\\n| Z-CBM (Training Head)|62.73|0.6276|\\n\\n**Our Z-CBMs achieved competitive performance even when using such a small concept bank.** However, they degraded the CLIP-Score. This suggests the importance of the concept bank. In our zero-shot setting, covering sufficient concept knowledge with an abundant vocabulary is important to map concept-to-label without learning. We will add this result and discussion to Sec. 4.6.\\n\\n## **W2/Q2.** Qualitative and quantitative concept comparisons of linear regression and lasso\\nWe also thank you for this insightful suggestion. For qualitative evaluation, we add the concept visualizations of Z-CBMs with linear regression to Fig. 3 of the revised paper. We can see that **linear regression tends to produce concepts that are related to each other**. In fact, quantitatively, we also found that the averaged inner CLIP-Score among the top-10 concepts of lasso is significantly lower than that of linear regression (0.6855 in lasso vs. 0.7826 in linear regression), which means that **lasso produces more independent concepts than linear regression**. These results emphasize the advantage of using sparse regression like lasso in concept regression to reduce redundancy among the concepts. \\n\\n## **Q3.** Does Fig. 
6 represent the total time for a zero-shot inference? Provide a breakdown of the reported times.\\nYes, the y-axis of Fig. 6 represents the total time. A detailed breakdown is as follows.\\n\\n**Table. 4-2 Execution time for zero-shot inference (milliseconds)**\\n| $K$ | total | feature extraction | concept retrieval | concept regression |\\n| :- | :- | :- | :- | :- |\\n| 128 | 7.87 | 0.12 (1.5%)| 1.00 (12.7%)| 6.63 (84.2%)|\\n| 256 | 11.64 | 0.11 (1.0%)| 1.68 (14.5%)| 9.69 (83.2%)|\\n| 512 | 17.31 | 0.11 (0.7%)| 1.87 (10.8%)| 15.15 (87.5%)|\\n| 1024 | 33.75 | 0.12 (0.4%)| 3.11 (9.2%)| 29.88 (88.5%)|\\n| 2048 | 55.63 | 0.11 (0.2%)| 5.35 (9.6%)| 49.23 (88.5%)|\\n\\nFor any $K$, the computation time of concept regression dominates the total. This is a limitation of Z-CBMs: there is a trade-off between computation time and the accuracy of the zero-shot inference. Therefore, we will address this trade-off and speed up the zero-shot inference in future work. We will revise Sec. 4.6.3 to add this discussion.\"}", "{\"title\": \"Reminder of end of discussion period\", \"comment\": \"Dear Reviewer xHwx,\\n\\nThank you for your effort in this review process. We sincerely remind you that **the extended discussion period will end in a few days**. Since we have addressed your remaining concerns above and agreed with other reviewers on the novelty of our work, we would be happy if you could read our responses and update your score or leave additional comments. We are sure you, the knowledgeable reviewer, will re-evaluate our paper based on them. Finally, we deeply appreciate your participation in this long discussion and would welcome your thoughts.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Author Response for Rb4A\", \"comment\": \"We appreciate your detailed and professional review and address your concerns below. 
We will revise our paper accordingly.\\n\\n## **W1.** Accuracy is not impressive, and the paper lacks analysis of the modality gap.\\nThank you for this comment. We respectfully remark that our main contribution is building practical zero-shot interpretable models without target datasets or training (L078). Table 1 supports our practicality claim, as naive methods struggle to achieve competitive performance (Table 6). To analyze the modality gap, we followed [b] and measured L2 distances: $1.74\\\\times 10^{-3}$ for image-to-label and $0.86\\\\times 10^{-3}$ for concept-to-label features. This demonstrates that **Z-CBMs significantly reduce the modality gap via concept regression**. Additionally, Fig. 7 shows PCA feature visualizations, indicating that weighted concept sums effectively bridge image and text modalities. These findings highlight that our results are not \\\"normal\\\" and offer novel insights.\\n\\n[b] Liang, Victor Weixin, et al. \\\"Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning.\\\" NeurIPS 2022.\\n\\n## **W2.** CLIP-Score may be misleading since it uses reconstructions from image features.\\nApologies for any confusion. Table 2 presents averaged scores of the top-10 concepts, selected by sorting absolute regression coefficients without using reconstructed vectors from Eq. (3). We also did not weight concepts by their coefficients when computing the CLIP-Score. Thus, **the scores are not obvious, as they depend on the regression algorithm used**. For example, lasso outperforms linear regression, indicating better concept selection (Table 2-1).\\n\\n**Table 2-1. 
Evaluations of Z-CBMs on ImageNet**\\n|Regression Alg.|Top-1 Acc.|CLIP-Score|\\n|:-|:-|:-|\\n|Linear Regression|52.88|0.7076|\\n|Lasso|62.70|0.7746|\\n\\n## **W3/Q2.** How was $\\\\lambda$ tuned, and how does it affect results?\\nWe selected $\\\\lambda = 1.0\\\\times10^{-5}$ by searching from $\\\\{10^{-2},10^{-3},\\\\dots,10^{-8}\\\\}$, aiming for a non-zero concept ratio above 10\\\\% with $K=2048$ on a subset of ImageNet. This $\\\\lambda$ was used consistently across all experiments. As shown in Fig. 8 of the revised paper, varying $\\\\lambda$ affects concept sparsity and accuracy, underscoring the importance of careful $\\\\lambda$ selection for balancing sparsity and performance.\\n\\n## **W4.** On the relevance to \\\"Attributes2Classname (A2C)\\\"\\nThank you for sharing related work. Indeed, A2C shares ideas of predicting textual concepts (attributes) from images and then predicting the final class from the concepts.\\nHowever, unlike A2C, which requires supervised training for both image-to-concept and concept-to-label mappings, our Z-CBMs operate without additional training or datasets.\\n\\n## **W5.** Z-CBMs rely on CLIP and cannot be used in incompatible domains\\nThis is an inherent limitation shared by Z-CBMs and other VLM-based CBMs (e.g., Label-free CBMs). Nevertheless, we already have domain-specific CLIP models like MedCLIP [c] and can easily expect such CLIP variants to appear in each domain. In this sense, one advantage of Z-CBMs over other CBMs is their availability without learning when such foundation models emerge. Besides, giving interpretability to the foundation models through Z-CBMs will also serve as a baseline for building supervised CBMs, making a fundamental contribution to this area.\\n\\n[c] Wang, Zifeng, et al. \\\"Medclip: Contrastive learning from unpaired medical images and text.\\\" EMNLP 2022.\\n\\n## **Q1. Can you provide more detailed comparisons to prior work with their concept sets?**\\nYes. 
We have already shown the Z-CBM results with the GPT-3-generated concepts of Label-free CBMs in Table 6. Here, we compare the performance when using the identical concept set:\\n\\n**Table. 2-2 Evaluation on ImageNet with GPT-3 (ImageNet Class) Concepts**\\n||Top-1 Acc.|CLIP-Score|\\n|:-|:-|:-|\\n|Label-free CBM|58.00|0.7056|\\n|CDM |62.52|0.7445|\\n|Z-CBM (Zero-shot)|59.18|0.6276|\\n|Z-CBM (Training Head)|62.73|0.6276|\\n\\nAlthough this is not a fair comparison since the baselines learn concept-to-label mapping on supervised datasets while Z-CBMs do not, our Z-CBMs achieved competitive performance. However, the CLIP-Scores are lower than those of Z-CBMs (All) in Table 2. This suggests the importance of using a large-scale concept bank to accurately map concept-to-label without learning.\\n\\n## **Q3. How well does the method perform when using a random matrix as concept features $F_{C_x}$?**\\nWe evaluated this case on ImageNet as follows:\\n\\n**Table 2-3. Top-1 Accuracy on ImageNet**\\n|$K$|Z-CBM|Random|\\n| :- | :- | :- |\\n|128|54.91|13.90|\\n|256|57.83|36.08|\\n|512|60.86|50.41|\\n|1024|61.92|55.91|\\n|2048|62.70|61.88|\\n\\nUsing random matrices, the performance gradually approaches the zero-shot baseline (61.88) as $K$ increases. This is because a larger number of random vectors has a higher chance of containing vectors similar to the image features. However, **real concepts of Z-CBMs always outperformed random concepts, demonstrating the advantage of using meaningful concepts**.\"}", "{\"comment\": \"Dear Reviewer ReJr,\\n\\nThank you for reading our rebuttal and providing clarifications.\\n\\n> First, your explanation that performance improvements over the original CLIP are due to reducing the modality gap is unconvincing. As I mentioned earlier, the modality gap still exists in the image-to-concept matching stage. 
It's unclear how introducing an intermediate \"concept\" stage in Z-CBM, rather than direct image-to-label matching, effectively reduces this gap.\\n\\nWe agree that the modality gap still exists in image-to-concept matching. Conversely, because of the modality gap, the sparse-regression approximation of the image embeddings by textual concept vectors is not perfect, i.e., the regression losses are not zero, as we can see from the image and reconstructed feature clusters in Fig. 7. Therefore, we consider that such an imperfect text-modal approximation of the image embeddings results in an improved modality gap during concept-to-label matching, leading to improved performance.\\n\\n> Moreover, if reducing the modality gap were truly the key factor behind the performance improvement, Z-CBM should consistently outperform direct image-to-label matching across various backbones, but that is not the case.\\n\\nWe do not claim that Z-CBMs always improve the zero-shot CLIP baseline in arbitrary cases. Table 4 shows that the performance improvement occurs when the backbone CLIP has relatively few parameters and lower performance. This indicates that Z-CBMs can help the model improve performance when the backbone's modality alignment capability is not so strong. We also respectfully note that performance improvement is not the main contribution of our work. So, the analysis of the modality gap is for explaining why Z-CBMs improve the CLIP baseline in some cases, not for supporting our main claim, i.e., building zero-shot CBMs.\\n\\n> Second, I have also read the comments from other reviewers and share their concerns regarding the limited novelty.\\n\\nFor the novelty of our work, we have explained the details based on scientific facts in the responses, including additional evaluations and a discussion of related work in the revised paper. 
Thus, we would be happy if you could decide your final score after reading the discussion fairly.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Author Response for FScg\", \"comment\": \"Thank you for your positive review.\\n\\n> I can't find anything wrong with this paper except perhaps the lack of technical innovation. There is abundant literature on concept bottleneck models. Sparse regression on concept features is very widely used. Using retrieval to find relevant concepts is not technically interesting. In my opinion, this work does not add much value to the existing CBM literature\\n\\nThank you for mentioning this. We respectfully note that our paper has technical innovation beyond existing CBMs. First of all, the idea of building CBMs for arbitrary vision-language models in a zero-shot manner is technically novel, as acknowledged by Reviewers xHwx and mgBg. Specifically, the design of Z-CBMs, which search input-related concept candidates from large-scale concept banks and then predict the importance of each concept by sparse regression, is not obvious and has significant novelty. Also, since this paper is the first work on zero-shot CBMs, it is important to solve the problem as simply as possible as a baseline for future work. To this end, designing a method with well-known technical components such as retrieval and sparse regression is reasonable. As you commented, simplicity is also a strength, so we would be happy if you could look at the contributions that make zero-shot CBMs possible, rather than the simplicity of the individual techniques.\"}" ] }
5AJ8R4z5g0
Potential Outcomes Estimation Under Hidden Confounders
[ "Ahmed Aloui", "Juncheng Dong", "Ali Hasan", "Vahid Tarokh" ]
One of the major challenges in estimating conditional potential outcomes and the conditional average treatment effects (CATE) is the presence of hidden confounders. Since testing for hidden confounders cannot be accomplished only with observational data, conditional unconfoundedness is commonly assumed in the literature of CATE estimation. Nevertheless, under this assumption, CATE estimation can be significantly biased due to the effects of unobserved confounders. In this work, we consider the case where in addition to a potentially large observational dataset, a small dataset from a randomized controlled trial (RCT) is available. Notably, we make no assumptions on the existence of any covariate information for the RCT dataset, only requiring the outcomes to be observed. We propose a CATE estimation method based on a pseudo-confounder generator and a CATE model that aligns the learned potential outcomes from the observational data with those observed from the RCT. Our method is applicable to many practical scenarios of interest, particularly when privacy is under concern (e.g., medical applications). Extensive numerical experiments are provided demonstrating the effectiveness of our approach for both synthetic and real-world datasets.
[ "Confounders", "Causal Inference", "Treatment Effects" ]
Reject
https://openreview.net/pdf?id=5AJ8R4z5g0
https://openreview.net/forum?id=5AJ8R4z5g0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ysGiFRjohq", "tLbWAGd2IV", "sIh7ps2TcR", "kXELAD09gS", "k221BcOO08", "hjrx2CrrOr", "gw7rDL5UBp", "eRt4yzyf5f", "XghQ4Eh0hF", "WEKmP4Fop6", "T63rmel5tm", "QXVQC68uDh", "NDYVTCEhXP", "N7mrzLdHzA", "LIf4iNOpVb", "IGXEnjzwRb", "DvpR4VTvWD", "DWQU9fysvC", "BfdZzQPsKE", "BSoCQwv8c7", "BP4yo0QV83", "Alr9kLKiuu", "8uxNwkHtke", "72IH0gmI8V" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732307577178, 1729109909125, 1731501869046, 1731503108042, 1731511997977, 1729181833962, 1731529405535, 1730740889762, 1733195902921, 1732308094085, 1731509212546, 1731501796749, 1734923251133, 1731515624487, 1731504660216, 1732308375765, 1733195039420, 1730689819856, 1733195075708, 1733195931705, 1731504132229, 1731515356882, 1737523952250, 1731501625385 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Reviewer_dgih" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Senior_Area_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8972/Reviewer_aUnR" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Reviewer_hGG4" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Area_Chair_rbPH" ], [ 
"ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Reviewer_7fgj" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Submission8972/Reviewer_aUnR" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8972/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer hGG4,\\n\\nThank you for taking the time and effort to review our paper. \\n\\n**Assumption on Potential Outcome Distributions**\\n\\nYou are correct that the assumption that the potential outcomes of the RCT distribution are the same as those of the observational data potential outcomes is strong and may not hold in practice. We made this assumption for ease of mathematical analysis. However, in our experimental setting, the three real-world datasets do not satisfy this assumption, and our method still demonstrates competitive performance. Please note that this is not an uncommon practice in the causal inference literature as it is the case when asserting that treatment effects are identifiable when an RCT is available.\\n\\nIn future work, we plan on extending the theoretical results to the more realistic scenarios, such as when a selection bias impacts the distribution of RCT participants compared to that of observational data participants.\"}", "{\"summary\": \"The paper introduces two novel methods: Marginals Balancing (MB) and Projections Balancing (PB), to address the challenge of estimating Conditional Average Treatment Effects (CATE) under hidden confounding. These methods leverage outcome-only Randomized Controlled Trial (RCT) data to mitigate bias from unobserved confounders, outperforming previous approaches. 
The combination of MB and PB (MB+PB) further enhances performance in both synthetic and real-world datasets, providing good results even with limited RCT data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Empirically, the combined MB+PB method demonstrates superior performance compared to established baseline methods (albeit the experimental setting satisfies the strong assumptions imposed by the authors)\", \"The idea of enforcing a balancing constraint to learn CATE is, to my knowledge, novel and interesting.\"], \"weaknesses\": [\"**Main concern**: A critical assumption of the paper is that potential outcomes share the same distribution across both RCT and observational study. This is (almost) never the case in real-world data where even assuming the two distributions have the same conditional mean ($E_{P^{rct}}[Y|X] \\\\equiv E_{P^{os}}[Y|X] $, aka transportability) is considered too strong. Unfortunately, this limitation is never discussed in the paper and the methodology heavily relies on this assumption.\", \"**Secondary concern**: The authors assume access to only the outcomes and treatments from RCT data, without the accompanying covariates. This scenario is rare and not typically observed in practical settings, and the authors never discuss the plausibility of their setting. Clarification and examples where such a condition might realistically occur would strengthen the motivation behind the proposed methodology.\", \"**Concerns on writing**\", \"A significant portion of the paper is dedicated to explaining that hidden confounding introduces bias in treatment effect estimation. 
While this is fundamentally important, it is a well-acknowledged concept in existing literature, and thus, a more concise treatment might suffice.\", \"The manuscript's extensive use of bold font could be seen as distracting, reducing the emphasis intended for the most critical points.\"], \"questions\": [\"Could the authors provide realistic scenarios where only the outcomes and treatments from an RCT are available, but not the covariates?\", \"Could the authors elaborate on their assumptions regarding the distribution of potential outcomes, specifically these distributions are identical for both randomized and observational data?\"], \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continuation Response to your wrong, unethical, careless and probably machine generated allegations/review (2)\", \"comment\": \"As shown, there is no evidence to support your claim of false references. Furthermore, suggesting that more of our manuscript may be generated by an LLM, based solely on this incorrect assertion, is speculative and lacks any concrete basis.\\n\\nIncorrect line references: Several of the reviewer\\u2019s comments are linked to irrelevant line numbers, making them difficult to address:\\nLine 58: You reference this line to be a paragraph headline, yet it points to Figure 1.\\nLine 70: Your comment about missing references is directed at a figure caption, where no references are applicable.\\nLine 35: There is no discussion of consistency at this line, contrary to your claim.\\nThese errors suggest a significant lack of attention to the manuscript or confusion about its content.\\n\\nVague and unhelpful critiques: You state that our presentation \\\"contains errors, hindering easy understanding of the line of thought,\\\" without specifying any instance of such errors. 
This type of feedback is too vague to be actionable, and we ask for specific examples to improve our work meaningfully.\\n\\nTechnical misunderstandings: You question the definition of $g$ in Proposition 3.2, although it is explicitly defined within the proposition. This indicates either a lack of familiarity with standard mathematical notation or insufficient attention to the manuscript.\\n\\nConclusion: While we value constructive criticism, your review includes several baseless claims, vague critiques, and frequent inaccuracies. We believe that these issues hinder the academic integrity of the review process. We are used to low-quality reviews and materially wrong reviews at ML conferences, but your baseless allegations are outrageous. \\n\\nWe request that you either immediately show some remorse and enter a serious apology, or else we will write to the technical committee with a screenshot of your dishonest comments included and request an ethics investigation in this matter.\\n\\nBest regards,\\nThe Authors.\"}", "{\"title\": \"Demanding Investigation of Reviewer aUnR for Ethical Violation\", \"comment\": \"Dear TPC Chairs, Senior Area Chairs, and Area Chairs,\\n\\nI am the senior author of this submission. I am writing to report a potential violation of ethics codes by reviewer aUnR.\\nAs you can see, the reviewer makes the allegation that the citations of our paper are non-existent, fake, and machine generated, and that the paper in part may be LLM generated. \\n\\nIn the response that I just posted, we gave every single citation of the paper and provided links to every citation. This demonstrates the falseness of the allegations made by the reviewer.\\n\\nWhile we value constructive criticism, the reviewer makes several baseless claims and vague, nonsensical critiques, and the review has frequent inaccuracies. We believe that these issues hinder the academic integrity of the review process. 
\\n\\nNote that I am a seasoned researcher, and my students and I are used to low-quality reviews and materially wrong reviews at ML conferences, but I find reviewer aUnR's baseless allegations outrageous.\\n\\nI am writing to you to demand an ethics investigation into reviewer aUnR's actions in this matter. From reading his/her/their review, I suspect that this review may be machine generated in part (as it is very incoherent).\\n\\nI will write to you directly (outside of the OpenReview platform) to formally demand an investigation unless reviewer aUnR retracts his/her/their review.\\n\\nBest regards, \\nSenior Author\"}", "{\"title\": \"Untrue Claim By Senior Author\", \"comment\": \"Hi Senior Author,\\n\\nI am your Senior Area Chair, and you did not have a discussion with me. Please refrain from claiming that you have had interactions that you have not. \\n\\nBest,\\nYour Senior Area Chair\"}", "{\"summary\": \"The paper proposes a method for CATE estimation under unobserved confounding. It assumes the presence of a small RCT dataset to correct for the bias introduced due to the unobserved confounding.\\nImportantly, the paper does not require the covariates of the RCT data to be observed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method does not require covariate information for the RCT dataset. This is convenient in practice and differentiates the method from existing works.\"], \"weaknesses\": [\"A discussion of methods for CATE estimation under hidden confounders (e.g. sensitivity analysis) is missing. 
Furthermore, a comparison to such methods is missing. This prevents a fair assessment of the method's usefulness.\", \"The method is compared to a factual learner, which is not a proper baseline for comparison, as the biasedness of the learner is known and also has been discussed in the paper.\", \"The provided empirical insights throughout Section 3 are misleading.\", \"The method is restricted to binary treatments. This is a disadvantage of the proposed method in comparison to existing methods for CATE estimation under hidden confounding.\", \"The presentation contains errors, hindering easy understanding of the line of thought. There is only limited flow in the paper.\", \"Limited referencing of existing works (e.g., lines 70, 356)\", \"The paper only has limited theoretical justification.\", \"Evaluation: A proper evaluation of CATE estimators is only possible on synthetic or semi-synthetic datasets. However, the paper only considers one very low-dimensional dataset with a very simple data generation mechanism. This is not sufficient to evaluate the general performance of the proposed method. An evaluation on further, more complex datasets with higher-dimensional unobserved confounding is necessary to properly evaluate the method.\"], \"questions\": [\"How would the method extend to non-binary treatments?\", \"Line 35: In my opinion, this definition implies the consistency assumption. However, the assumption is not stated.\", \"Line 58: The headline of this paragraph is \\\"performance metric\\\". However, this is not stated in the paragraph.\", \"Confounding degree: Why is this introduced here? If it is considered further on, a proper introduction of sensitivity models and sensitivity analysis is necessary.\", \"IMHO, Section 3.1 is unnecessary and only hinders the reader's concentration. 
Unobserved confounding is a known topic in causal inference. There is no need for a separate case study.\", \"Line 270: What is meant by pseudo-confounder?\", \"Line 296: Why is it reasonable to generate a pseudo-confounder? How is this done? What are the theoretical guarantees for the generated confounder?\", \"Proposition 3.2: What is g? Why is it necessary?\", \"Equation 8: This needs more mathematical explanation or at least references to works covering the respective theory\", \"What is the theoretical justification for the final method MB+PB?\", \"Figure 8: Although the combined MB+PB performs much better than the methods separately, the PEHE is still quite high (considering that the figure plots sqrt(PEHE) against log(Gamma)). The figure thus does not aid in assessing the usefulness of the proposed method. A fair comparison with other baselines (even partial identification) would be helpful.\", \"Figure 8: Why does the Obs-Oracle PEHE increase with log(Gamma) even if there is no unobserved confounding introduced in the data?\", \"Lines 459/460: The paper states that \\\"MB+PB shows strong robustness\\\". However, this is not shown in the figure. Considering the scale of figure 8, MB+PB is not robust with regard to Gamma.\", \"Influence of RCT sample size: Again, further evaluation is necessary to draw the general conclusions stated in the paper.\", \"Table 1 shows that MB+PB in some cases outperforms other methods which additionally consider covariate information from the RCT. This is counter-intuitive. How can this be explained?\"], \"flag_for_ethics_review\": ['Yes, Unprofessional behaviors (e.g., unprofessional exchange between authors and reviewers)', 'Yes, Other reasons (please specify below)'], \"details_of_ethics_concerns\": \"The paper cites false and non-existing references. This indicates unprofessional research behavior and the use and abuse of LLMs. 
Due to this fact, I am sadly wondering if more parts of the paper are generated by an LLM. For more details, please see my official comment above.\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer aUnR,\\n\\nI am the student author, and I am writing to sincerely apologize for the oversight in copying author names from the NIH website, specifically from: https://pubmed.ncbi.nlm.nih.gov/38641741/\\n\\nWhile translating the reference from the NIH website into bibtex format, I inadvertently made errors in transcribing the full names. The original reference was provided in the following style:\\n\\nFeuerriegel S, Frauen D, Melnychuk V, Schweisthal J, Hess K, Curth A, Bauer S, Kilbertus N, Kohane IS, van der Schaar M. Causal machine learning for predicting treatment outcomes. Nat Med. 2024 Apr;30(4):958-968. doi: 10.1038/s41591-024-02902-1. Epub 2024 Apr 19. PMID: 38641741. \\n\\nIn the process of converting this reference to a bibtex format, I mistakenly altered some of the names. I cannot pinpoint precisely how this happened, but I want to clarify that the reference was not generated from scratch by a language model (LLM). I used tools like Writefull and Grammarly at the sentence level (as disclosed in the submission), and these may have unintentionally modified the names.\\nNote that all other references are correct as I just double checked them and there were no errors in the other names. \\n\\nI want to assure you that no part of the paper was generated by an LLM, aside from sentence-level edits as disclosed. \\n\\nBest,\\nStudent Author\"}", "{\"summary\": \"This paper uses RCT paired with observational data to help mitigate the impact of confounding in the observational data. 
The difference from other work in this literature is that it assumes no covariates are available in the RCT sample and instead tries to match the distribution of potential outcomes between the two populations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The issue is important, and the setting where covariate information is not available for the RCT does appear in unfortunately many applications.\", \"weaknesses\": \"The identification strategy in the paper appears to hinge on the assumption that the potential outcome distribution is exactly the same between the two populations. This is very unlikely: RCT populations are notoriously different than observational populations in domains like health (where the privacy issues that the paper's motivation points to are most likely to arise) because trial recruitment is very far from just sampling the general population (e.g., sicker patients and patients from minority groups are typically underrepresented, along with a range of other issues) . Most work in the literature makes the weaker assumption that the CATEs are equal in the two populations and what differs is just the marginal distribution of covariates, which is much more plausible if the set of covariates available is sufficiently rich. 
I cannot think of any settings where imposing exact equality in the potential outcome distributions by themselves is a plausible assumption.\", \"questions\": \"Are there application settings where the above assumption is justified?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer aUnR,\\n\\nAs we near the end of this discussion period, we wish to extend our sincere thanks for the time and effort you have invested in reviewing our work.\\n\\nThis message serves as a gentle reminder to let us know if you have any additional questions or if you could specify the lines you were referring to regarding the additional references. \\n\\nBelow, we provide additional clarifications on some points that were not addressed in our response above:\\n\\n**Confounding degree:**\\n\\nWe introduce the discussion in the background the study the sensitivity of our proposed approach to the confounding degree.\\n\\n**Pseudo-confounder:**\\n\\nThe pseudo-confounder is a random variable that when introduced will make the distribution of the predicted potential outcomes the same as that of the RCT potential outcomes. \\n\\n**Obs-Oracle PEHE increasing:**\\n\\nWe hypothesize that this occurs because the data size is fixed, and as confounding increases, the different treatment populations may not be adequately represented for the neural network to learn the functions optimally.\\n\\n**MB+PB in some cases outperforms other methods:**\\n\\nThat is the main contribution of the paper, as we observed that competitive results can be achieved using less information compared to baseline methods.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer 7fgj,\\n\\nThank you for taking the time and effort to review our paper. 
\\n\\n**Theoretical Analysis**\\n\\nWe agree with the reviewer that the two proposed regularizers require further theoretical investigation from an optimization and filtering theory perspective. However, the main aim of this work is to propose these regularizers as potential solutions and to empirically demonstrate their efficacy. Moreover, we did include a theoretical construct for the Projections Balancing regularizer, where we provide an upper bound on the error in estimating the true conditional potential outcome. \\n\\n**Sample Sizes and Estimation Outcome**\\n\\nAs shown in Figure 9, even a small set of RCT outcomes significantly reduces the EPEHE, indicating that only a relatively small amount of RCT data is needed compared to observational data. \\n\\n**ACTG performance**\\n\\nPlease note that CorNet performs better on ACTG while using more information (i.e., the *covariates* of the RCT data). In contrast, our method achieves a competitive performance while relying solely on RCT *outcomes*. We hypothesize that CorNet's superior performance may result from (i) its neural net capturing the true relationship with the small RCT data and (ii) the distributional shift between the RCT and observational data not being severe. These two conditions give CorNet advantages over our proposed approach.\"}", "{\"title\": \"I just had a discussion with Senior Program Chair\", \"comment\": \"I just had an email discussion with the senior program chair.\\n\\nAgain, you made some very serious allegations, and they were so serious that they made me extremely upset. I do not get upset by a paper being rejected (as I said, we all know about the quality of the reviews in ML conferences), but these kinds of allegations are too much.\\n\\nI have foreign students who are not familiar with other national names, and one of them had a couple of typos in entering a first name or two in one reference (while working very late). I am sorry if you happen to be one of the authors of that paper. 
\\n\\nJust so that everyone knows, I always re-write the entire text of the paper, as students' English is not perfect. Granted, I missed a couple of typos in one of the references, but the allegation that we produced this paper with LLMs is ridiculous.\", \"now_you_say\": \"\\\"Directly at the beginning of the reviewing period, I wrote a comment to the program committee about potential ethical concerns. I thought that this comment would become public during rebuttal time. Therefore, I referred to the \\\"comment above\\\" in my review. As this is not the case, I will repost my comment below.\\\"\\n\\nWhat makes you think a couple of typos make a paper LLM-generated, while your own review is so incoherent?\\n\\nIn any case, my comment that your review was LLM-generated came out of being provoked by your allegations. I am sorry about that. However, I maintain that your review is incoherent and has many issues. That said, I appreciate the time that you put into it. I do not claim to be a better reviewer than you are.\\n\\nBest wishes\\n\\nSenior Author\"}", "{\"title\": \"Continuation Response to your wrong, unethical, careless and probably machine generated allegations/review (1)\", \"comment\": \"[15] Tobias Hatt, Daniel Tschernutter, and Stefan Feuerriegel. Generalizing off-policy learning under sample selection bias. In Uncertainty in Artificial Intelligence, pp. 769\\u2013779. PMLR, 2022b.\", \"link\": \"https://www.tandfonline.com/doi/full/10.1080/01621459.2017.1319839\"}", "{\"metareview\": \"The paper proposes to account for unobserved confounding in observational causal inference by assuming the availability of outcome data from a small RCT, which is used to train a generator model to ensure that counterfactuals from observational data match the RCT outcomes. 
Several empirical demonstrations show improvement of CATE estimation compared to existing techniques.\\n\\nReviewers have acknowledged the importance of the problem, particularly that covariate information from RCTs is not required, which contributes to the overall novelty of the contribution. \\n\\nHowever, reviewers have pointed out several weaknesses: exactly matching counterfactuals may not be ideal given that trial populations often differ significantly from observational samples; the theoretical analysis is insufficient; the highly relevant sensitivity analysis literature is insufficiently discussed; there is a lack of justification for when the scenario of no covariates being available from RCTs holds in practice; and there are clarity issues in the writing, in that much space is dedicated to explaining the bias due to hidden confounding while the proposed method is insufficiently justified. \\n\\nThe author rebuttal has addressed some of the concerns, but in general the justification and responses are insufficient; the draft requires a considerable revision pass to improve clarity, provide additional justification for the setup, and justify the assumption of transportability of the CATE estimate in the first place. \\n\\nOverall, based on these concerns, I recommend a reject.\", \"additional_comments_on_reviewer_discussion\": \"No additional concerns raised during discussion\"}", "{\"title\": \"My bad--his title is Senior Program Chair\", \"comment\": \"My bad--his title is Senior Program Chair. Somehow there are too many different titles, and I thought it was the same.\"}", "{\"title\": \"Is this an apology?\", \"comment\": \"You made very serious allegations. Now you say there were some typos.\\nSince you do not apologize and do not seem to retract your statements/reviews, I will have to write to the TPC chairs and demand an investigation.\\n\\nYour review is nonsense, but as I said, I am used to nonsense. 
Your comments are preposterous.\\n\\nSenior author\"}", "{\"comment\": \"Dear Reviewer dgih,\\n\\nThank you for taking the time and effort to review our paper. \\n\\n**Assumption on Potential Outcome Distributions**\\n\\nYou are correct that the assumption that the RCT potential outcome distribution is the same as that of the observational data is strong and may not hold in practice. We made this assumption for ease of mathematical analysis. However, in our experimental setting, the three real-world datasets do not satisfy this assumption, and our method still demonstrates competitive performance. Please note that this is not an uncommon practice in the causal inference literature, as a similar assumption is made when asserting that treatment effects are identifiable when an RCT is available. We will clarify this point in our updated manuscript.\\n\\n**Practical Scenarios**\\n\\nAs noted by Reviewer hGG4, the scenario in which covariate information is missing from the RCT data is quite common. For instance, older RCTs often lack covariate data because such information was not collected at the time; e.g., some controlled trials were conducted before electronic health records, and hence a lot of the now-available covariate data is missing in the RCT experiment. Additionally, imposing a constraint that requires complete covariate information when selecting RCT candidates could introduce bias into the RCT data. Lastly, privacy concerns may lead RCT participants to withhold their covariate information. \\n\\n**Writing Style**\\n\\nThe goal of Section 3 was to illustrate the impact of each regularization term on the optimization problem and its solution. 
\\n\\nThank you for your suggestions; we will minimize the use of bold fonts.\"}", "{\"comment\": \"Dear Reviewer hGG4,\\n\\nAs we near the end of this discussion period, we wish to extend our sincere thanks for the time and effort you have invested in reviewing our work.\\n\\nThis message serves as a gentle reminder to please let us know if you have any further questions we can assist with and if you are considering adjusting your assessment of our work based on the feedback received.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper studied the problem of conditional average treatment effects (CATE) estimation. The paper proposed a new CATE estimation method in the case where, in addition to a potentially large observational dataset, a small dataset from a randomized controlled trial (RCT) is also available - in particular, only the outcomes are required from the RCT. The proposed method is based on a pseudo-confounder generator and a CATE model which aligns the learned potential outcomes from the observational data with the outcomes observed from the RCT dataset. Numerical experiments demonstrated the effectiveness of the proposed estimation approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"this paper studied an important and practical problem of CATE estimation, where the estimator can be significantly biased due to the effects of unobserved confounders. The proposed method works in a setting where a large observational dataset and a small dataset from a randomized controlled trial are available - while only the outcomes in the RCT are required to be observed. I found this setting to be more realistic and applicable to many real-world cases.\", \"overall the paper is presented with good clarity\", \"the two regularizations required by the estimation are model-agnostic, so the method is flexible to be applied to different CATE models, e.g. 
neural networks\", \"experiments with both simulation data and real-world case studies demonstrate that the proposed estimation algorithm outperform / comparable to other baselines\"], \"weaknesses\": \"There is a lack of theoretical analysis on the two regularizations, in particular quantifying the bias reduction from each of the regularization.\", \"questions\": [\"How does the sample sizes of the potentially large observable data and the small RCT dataset affect the estimation outcome?\", \"For ACTG case study, CorNet worked better than the proposed estimator. What are the potential causes for the bad performance on this particular dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 7fgj,\\n\\nAs we near the end of this discussion period, we wish to extend our sincere thanks for the time and effort you have invested in reviewing our work.\\n\\nThis message serves as a gentle reminder to please let us know if you have any further questions we can assist with and if you are considering adjusting your assessment of our work based on the feedback received.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer dgih,\\n\\nAs we near the end of this discussion period, we wish to extend our sincere thanks for the time and effort you have invested in reviewing our work.\\n\\nThis message serves as a gentle reminder to please let us know if you have any further questions we can assist with and if you are considering adjusting your assessment of our work based on the feedback received.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Answer to unjustified accusation\", \"comment\": \"Dear authors,\\n\\ndear PC, SAC, Ac,\\n\\nIf there is reason to believe my review is machine-generated or in some way uncoherent, I am happy to explain my comments in more detail. 
Directly at the beginning of the reviewing period, I wrote a comment to the program committee about potential ethical concerns. I thought that this comment would become public during rebuttal time. Therefore, I referred to the \\\"comment above\\\" in my review. As this is not the case, I will repost my comment below. \\n\\nOf note, the authors make **the same mistake** in their official answer to my review, accusing me of providing unsubstantiated, misleading, and unethical reviews. The next time the authors make such a strong statement, I would highly encourage them to check the correctness of their answer.\\n\\n*Dear AC, SAC, PC,*\\n\\n*While reviewing the paper, some ethical concerns regarding truthful scientific practice and the misuse of large language models arose. I would like to ask your opinion on this matter and the potential consequences.*\\n\\n*Specifically, the concerns arose when reading this version of a reference stated in the submitted paper:*\\n\\n*Stefan Feuerriegel, **Daniel** Frauen, **Viktoria** Melnychuk, **Julian** Schweisthal, **Katharina** Hess, Alicia Curth, **Sebastian** Bauer, Niki Kilbertus, Isaac S Kohane, and Mihaela van der Schaar. Causal machine learning for predicting treatment outcomes. Nature Medicine, 30(4):958\\u2013968, 2024.*\\n\\n*The correct reference should read:*\\n\\n*Stefan Feuerriegel, **Dennis** Frauen, **Valentyn** Melnychuk, **Jonas** Schweisthal, **Konstantin** Hess, Alicia Curth, **Stefan** Bauer, Niki Kilbertus, Isaac S Kohane, and Mihaela van der Schaar. Causal machine learning for predicting treatment outcomes. Nature Medicine, 30(4):958\\u2013968, 2024.*\\n\\n*I can only explain this uncommon mistake through the use of LLMs. I am aware that the stated issue is not extremely severe. 
However, it raises questions regarding the misuse of LLMs in other parts of the work.*\\n\\n*Best regards,*\\n\\n*Reviewer aUnR*\"}", "{\"title\": \"I did have a discussion with Carl Vondrick<[email protected]>\", \"comment\": \"I had a discussion over email with Carl Vondrick, whom the ICLR webpage lists as a Senior Area Chair\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to your wrong, unethical, careless and probably machine generated allegations/review\", \"comment\": \"Reviewer aUnR,\\n\\nWe are compelled to address several serious concerns regarding the feedback provided, which we believe is unsubstantiated, misleading, and unethical. Below, we outline these issues in detail:\\n\\nYour Serious, Unfounded, and Unethical Accusations:\\n\\nThe reviewer claims that we have included fake references, generated by an LLM. This claim is false and highly unprofessional. The reviewer is making baseless and highly accusatory claims and provides absolutely no evidence for them. To clarify and demonstrate the validity of our references, we have provided a complete list of all cited papers, presented in the same order as in the references section of our manuscript, along with their corresponding links:\\n\\n[1] Ahmed M Alaa and Mihaela Van Der Schaar. Bayesian inference of individualized treatment effects\\nusing multi-task gaussian processes. Advances in neural information processing systems, 30, 2017.\", \"link\": \"https://arxiv.org/abs/2202.12891\"}" ] }
5AB33izFxP
Simultaneous Online System Identification and Control using Composite Adaptive Lyapunov-Based Deep Neural Networks
[ "Omkar Sudhir Patil", "Emily J. Griffis", "Wanjiku A Makumi", "Warren Dixon" ]
Although deep neural network (DNN)-based controllers are popularly used to control uncertain nonlinear dynamic systems, most results use DNNs that are pretrained offline and the corresponding controller is implemented post-training. Recent advancements in adaptive control have developed controllers with Lyapunov-based update laws (i.e., control and update laws derived from a Lyapunov-based stability analysis) for updating the DNN weights online to ensure the system states track a desired trajectory. However, the update laws are based on the tracking error, and offer guarantees on only the tracking error convergence, without providing any guarantees on system identification. This paper provides the first result on simultaneous online system identification and trajectory tracking control of nonlinear systems using adaptive updates for all layers of the DNN. A combined Lyapunov-based stability analysis is provided, which guarantees that the tracking error, state-derivative estimation error, and DNN weight estimation errors are uniformly ultimately bounded. Under the persistence of excitation (PE) condition, the tracking and weight estimation errors are shown to exponentially converge to a neighborhood of the origin, where the rate of convergence and the size of this neighborhood depends on the gains and a factor quantifying PE, thus achieving system identification and enhanced trajectory tracking performance. As an outcome of the system identification, the DNN model can be propagated forward to predict and compensate for the uncertainty in dynamics under intermittent loss of state feedback. Comparative simulation results are provided on a two-link manipulator system and an unmanned underwater vehicle system with intermittent loss of state feedback, where the developed method yields significant performance improvement compared to baseline methods.
[ "Adaptive control", "Online Learning", "Control Theory", "Robotics" ]
Reject
https://openreview.net/pdf?id=5AB33izFxP
https://openreview.net/forum?id=5AB33izFxP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4WsPldkx8", "tym92kgaN8", "pW6Hm5MboN", "mgnEEX0ljl", "k3LjkbTMBX", "hZLeQ9sTwt", "hFPrXdVPGO", "eqdm5F8FQK", "eDfzkAwUJn", "cB9BSp6BMg", "aVcVU4TpYJ", "ZZy2Dldi0N", "Z1iiFopMdb", "PXebsDQcOT", "LQ3taFvpiD", "KMHKtRqmMF", "J7gyAGgXd0", "ElMv7pbPAw", "6JcGR78Wo9", "5JbZnoPfak", "5ExxlSyUjd", "3G7LuBO3oq", "2Dr83iRXoD" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1732838212858, 1732644481962, 1732364936456, 1732630904108, 1732362525640, 1732499733164, 1732362505576, 1732361779503, 1730612464661, 1732562242242, 1734329536621, 1732432083765, 1732770349796, 1732906374839, 1732843035393, 1730626532848, 1732838060662, 1732365553659, 1729934128482, 1729647861375, 1732362971761, 1732520436326, 1737524200785 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_XnMo" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_XnMo" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Area_Chair_uSRK" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_N4X4" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12570/Reviewer_N4X4" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_vqH4" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_bqXC" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_N4X4" ], [ "ICLR.cc/2025/Conference/Submission12570/Authors" ], [ "ICLR.cc/2025/Conference/Submission12570/Reviewer_vqH4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"On the intersection of deep learning and adaptive control (Part 2)\", \"comment\": \"For ease of illustration, consider a simple neural network (NN) with a scalar input $x$, one hidden layer, and two scalar weights, given by:\\n\\n$\\\\Phi(x, \\\\hat{\\\\theta}) = \\\\hat{w}\\\\_{1} \\\\phi(\\\\hat{w}\\\\_{0} x),$\\n\\nwith $\\\\hat{\\\\theta} = [\\\\hat{w}\\\\_{0}, \\\\hat{w}\\\\_{1}]^{\\\\top}$ and Jacobian:\\n\\n$\\\\Phi^{\\\\prime}(x, \\\\hat{\\\\theta}) = \\\\left[ \\\\hat{w}\\\\_{1} \\\\phi^{\\\\prime}(\\\\hat{w}\\\\_{0} x) x,\\\\, \\\\phi(\\\\hat{w}\\\\_{0} x) \\\\right]^{\\\\top}.$\\n\\nFurthermore, assume perfect tracking (zero tracking error), exact state-derivative estimates (i.e., $\\\\hat{f} = f(x)$, exact ground truth information), let the forgetting factor $\\\\beta = 0$, and omit the projection operator. In this case, the least squares adaptive update is given by:\\n\\n$\\\\dot{\\\\hat{\\\\theta}} = -k_{\\\\hat{\\\\theta}} \\\\Gamma(t) \\\\hat{\\\\theta} + \\\\Gamma(t) \\\\Phi^{\\\\prime\\\\top}(x, \\\\hat{\\\\theta}) \\\\left( f(x) - \\\\Phi(x, \\\\hat{\\\\theta}) \\\\right)$\\n\\n$\\\\dot{\\\\Gamma} = -\\\\Gamma \\\\Phi^{\\\\prime\\\\top}(X, \\\\hat{\\\\theta}) \\\\Phi^{\\\\prime}(X, \\\\hat{\\\\theta}) \\\\Gamma.$\\n\\nFor Euler discretization, let the time step be $\\\\Delta t$ and denote the discrete-time values at time step $n$ by $\\\\hat{\\\\theta}[n]$ and $\\\\Gamma[n]$. 
Suppose the initial values are $\\\\hat{\\\\theta}[0] = \\\\hat{\\\\theta}\\\\_{0}$ and $\\\\Gamma[0] = \\\\Gamma\\\\_{0}$.\\n\\n### Step 1 ($n = 1$)\\n\\n$\\\\hat{\\\\theta}[1] = \\\\hat{\\\\theta}[0] + \\\\Delta t \\\\left( -k_{\\\\hat{\\\\theta}} \\\\Gamma[0] \\\\hat{\\\\theta}[0] + \\\\Gamma[0] \\\\Phi^{\\\\prime\\\\top}(x[0], \\\\hat{\\\\theta}[0]) \\\\left( f(x[0]) - \\\\Phi(x[0], \\\\hat{\\\\theta}[0]) \\\\right) \\\\right)$\\n\\n$\\\\Gamma[1] = \\\\Gamma[0] - \\\\Delta t \\\\Gamma[0] \\\\Phi^{\\\\prime\\\\top}(X[0], \\\\hat{\\\\theta}[0]) \\\\Phi^{\\\\prime}(X[0], \\\\hat{\\\\theta}[0]) \\\\Gamma[0]$\\n\\n### Step 2 ($n = 2$)\\n\\n$\\\\hat{\\\\theta}[2] = \\\\hat{\\\\theta}[1] + \\\\Delta t \\\\left( -k_{\\\\hat{\\\\theta}} \\\\Gamma[1] \\\\hat{\\\\theta}[1] + \\\\Gamma[1] \\\\Phi^{\\\\prime\\\\top}(x[1], \\\\hat{\\\\theta}[1]) \\\\left( f(x[1]) - \\\\Phi(x[1], \\\\hat{\\\\theta}[1]) \\\\right) \\\\right)$\\n\\n$\\\\Gamma[2] = \\\\Gamma[1] - \\\\Delta t \\\\Gamma[1] \\\\Phi^{\\\\prime\\\\top}(X[1], \\\\hat{\\\\theta}[1]) \\\\Phi^{\\\\prime}(X[1], \\\\hat{\\\\theta}[1]) \\\\Gamma[1]$\\n\\n### Observing Dependencies\\n\\n- $\\\\hat{\\\\theta}[1]$ explicitly depends on $\\\\hat{\\\\theta}[0]$, $\\\\Gamma[0]$, and the data $x[0]$ and $f(x[0])$.\\n- $\\\\hat{\\\\theta}[2]$ explicitly depends on $\\\\hat{\\\\theta}[1]$, $\\\\Gamma[1]$, and the data $x[1]$ and $f(x[1])$.\\n\\nSubstituting $\\\\hat{\\\\theta}[1]$ into $\\\\hat{\\\\theta}[2]$ reveals that $\\\\hat{\\\\theta}[2]$ also indirectly depends on $\\\\hat{\\\\theta}[0]$, $\\\\Gamma[0]$, and the old data $x[0]$ and $f(x[0])$.\\n\\nThus, the updates are recursive and inherently carry information from previous states and data. This recursion ensures that historical data is not forgotten, aligning with the principle of online learning in adaptive control. 
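To make the recursion concrete, the two steps above can also be sketched numerically. The following is a minimal, hypothetical NumPy sketch (not code from the paper): it assumes the activation is tanh and uses illustrative gains, step size, and data points, with perfect tracking, exact ground-truth f(x), forgetting factor zero, and no projection operator, exactly as in the simplified setting above.

```python
import numpy as np

# Minimal sketch of the Euler-discretized composite least-squares update
# for the scalar example Phi(x, th) = w1 * phi(w0 * x) with th = [w0, w1].
# Assumptions (illustrative only): phi = tanh, gains/data chosen arbitrarily,
# perfect tracking, exact f(x), forgetting factor beta = 0, no projection.

def Phi(x, th):
    w0, w1 = th
    return w1 * np.tanh(w0 * x)

def Phi_jac(x, th):
    # Jacobian of Phi w.r.t. th = [w0, w1], a 1x2 row vector
    w0, w1 = th
    s = np.tanh(w0 * x)
    return np.array([[w1 * (1.0 - s**2) * x, s]])

def euler_step(th, Gamma, x, f_x, dt=0.01, k_th=0.1):
    J = Phi_jac(x, th)                                  # 1x2
    e = f_x - Phi(x, th)                                # scalar prediction error
    th_next = th + dt * (-k_th * Gamma @ th + (Gamma @ J.T)[:, 0] * e)
    Gamma_next = Gamma - dt * (Gamma @ J.T @ J @ Gamma)
    return th_next, Gamma_next

f = lambda x: 2.0 * np.tanh(0.5 * x)                    # illustrative ground truth

th0, Gamma0 = np.array([1.0, 1.0]), np.eye(2)
th1, Gamma1 = euler_step(th0, Gamma0, x=0.5, f_x=f(0.5))      # step 1 uses x[0]
th2, Gamma2 = euler_step(th1, Gamma1, x=-1.0, f_x=f(-1.0))    # step 2 uses x[1]
# th2 depends on th1 and Gamma1, hence (recursively) on x[0] and f(x[0]).
```

Here k_th, dt, and the data points are placeholders; the point of the sketch is only the recursive dependence of the weight and gain estimates on past data, not any specific trajectory.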
\\n\\nAdditionally, note that a controlled forgetting mechanism is important in online learning, because not all of the historical data might be useful (especially if the dynamics change over time) and it might be desirable to assign more weight to newer data. To this end, we used the bounded gain forgetting factor (i.e., $\\\\beta$) mechanism from Slotine and Li's composite adaptive control method. Specifically, the PE condition under higher levels of excitation yields a larger forgetting factor, thus giving more weight to newer data.\"}", "{\"comment\": \"We thank the reviewer for the timely response. We are happy the reviewer found our revisions regarding Assumption 2 satisfactory.\\n\\nAs for Assumption 1, it is not clear if the reviewer noticed our more recent update in Appendix F, where we added a brief illustration on extending the procedure to underactuated nonholonomic mobile robots. This example was added based on the reviewer's suggestion to provide more insights on performing the extension to an underactuated case.\\n\\nSince the reviewer agrees with the response \\\"the constructive Lyapunov-based control design procedure is unique for each underactuated system\\\", we hope it is understood that the expectation to consider a general underactuated system is unrealistic. The extension needs to be performed on a case-by-case basis. However, the process of performing such an extension mostly involves standard nonlinear control tools that are not unique to our work. Specifically, note that the composite adaptive DNN (our main contribution) part of the development in Appendix F is mostly the same as in Section 2 of the manuscript. The part of the development specific to mobile robots involved standard nonlinear control insights (i.e., the backstepping approach well known in the literature for mobile robots) from the work of Fierro and Lewis (1998). The extension to an underactuated case did not involve any theoretical novelty. 
To better focus on our main contribution, we focus the main development on non-underactuated systems.\\n\\nAdditionally, we still think there is a possible misunderstanding regarding simultaneity. The reviewer is requested to clarify the statement in the original review,\\n\\n> \\\"Assumption 1 implies that performing system identification can achieve arbitrary control performance (arbitrary system dynamics is realized by $u= g^+(r - f(x,\\\\dot{x}))$). In this sense, the problem addressed in this paper is not essentially simultaneous.\\\" \\n\\nThe briefness of the reviewer's comments is making it difficult for us to interpret what was meant. Assumption 1 does not preclude the simultaneity of identifying $f(x,\\\\dot{x})$ and implementing the control law. The control and adaptive update laws are simultaneous. The reviewer is requested to clarify if we are on the same page with that.\\n\\n>\\\"Assumption 1 severely limits the class of the plant system to completely eliminate both the challenge (and value) of the simultaneous problem\\\"\\n\\nWe would like to bring the reviewer's attention to the challenges involved in the DNN part of the development, which is the focus of our main contribution. There were multiple challenges precluding this development, all of which are explained in great detail in the Introduction and Appendix B. Specifically, developing Lyapunov-based composite adaptive update laws for all DNN layers is a challenging problem, especially due to the nested and nonlinear-in-parameter structure. In particular, the term $\\\\Phi^{\\\\prime}\\\\left(X,\\\\hat{\\\\theta}\\\\right)$ is not the standard regressor in adaptive control, but the DNN's Jacobian with respect to its vectorized weights, an analytical derivation of which is provided in Appendix C. The DNN's Jacobian is derived for an arbitrary number of layers by leveraging its recursive compositional structure. 
Additionally, the formulation of the prediction error term E has never been done in the literature for a DNN because of technical challenges that are specific to a DNN. Slotine's result ([24] in the previous version) used a filtered regression approach to formulate the prediction error. As described in Appendix B.2, there are mathematical challenges in extending Slotine's filtered regression approach to DNNs because the approach utilized the LIP structure of the model. To circumvent those challenges, we used a dynamic state-derivative estimator to construct a new prediction error, and the corresponding estimation errors are also incorporated in the Lyapunov-based analysis to ensure closed-loop stability after combining the developed controller, adaptation laws, and state-derivative estimator.\\n\\nTherefore, even with Assumption 1, the challenges are still significant, which the other reviewers have recognized. Note that this is a very recent topic of research with only a few closely related works, e.g., (Patil et al., 2021) and (O'Connell et al., 2022), both highly cited works. Both of these works performed the development assuming fully-actuated dynamics. If Assumption 1 completely eliminated the challenges as the reviewer claims, this topic of research would not have emerged in the first place.\\n\\nPatil, O.S., Le, D.M., Greene, M.L. and Dixon, W.E., 2021. Lyapunov-derived control and adaptive update laws for inner and outer layer weights of a deep neural network. IEEE Control Systems Letters, 6, pp.1855-1860.\\n\\nO\\u2019Connell, M., Shi, G., Shi, X., Azizzadenesheli, K., Anandkumar, A., Yue, Y. and Chung, S.J., 2022. Neural-fly enables rapid learning for agile flight in strong winds. Science Robotics, 7(66), p.eabm6597.\"}", "{\"comment\": \"We thank the reviewer for the appreciation of our contributions, theoretical results, and presentation. Based on the reviewer's suggestion, we removed the technical details in the abstract to make it more concise. 
Additionally, we have added the following sentence to the abstract: \\u201cunder the persistence of excitation (PE) condition, the tracking and weight estimation errors are shown to exponentially converge to a neighborhood of the origin, where the rate of convergence and the size of this neighborhood depends on the gains and a factor quantifying PE\\u201d.\\n\\n**Regarding composited disturbances**\\n\\nWe apologize for the brief statement about the previous work (O'Connell, 2022) in the introduction, which may have caused confusion. To make the discussion about (O'Connell et al., 2022) more precise, we have moved the discussion about this reference to the Related Work Section in Appendix B.2. For the reviewer's convenience, we are reproducing the discussion here: \\n\\n\\u201cThe recent result in (O'Connell, Shi, Shi, Azizzadenesheli, Anandkumar, Yue, and Chung, 2022) developed a new representation for learning uncertainties involving a composited disturbance given by $f(x,\\\\dot{x},w)=\\\\phi\\\\left(x,\\\\dot{x}\\\\right)a\\\\left(w\\\\right)$, where $\\\\phi\\\\left(\\\\cdot\\\\right)$ denotes a basis function that is learned using a DNN and $a\\\\left(w\\\\right)$ denotes a set of linear parameters accounting for an unknown time-varying disturbance $w$. Since $f$ is linear in terms of $a$, the composite adaptive approach from (Slotine & Li, 1989) is used to design an adaptation law $\\\\dot{\\\\hat{a}}$ to update the estimates of $a$, given by $\\\\hat{a}$. To obtain a disturbance-invariant representation of $\\\\phi$ using DNNs, a domain adversarially invariant meta-learning (DAIML) algorithm is developed to train the DNN offline. To the best of our knowledge, this is the only existing work using a composite adaptive approach in the context of deep learning-based control. 
However, since the DNN learning $\\\\phi\\\\left(x,\\\\dot{x}\\\\right)$ has an NIP structure, the aforementioned challenges apply for constructing a Lyapunov-based online adaptation law.\\u201d\\n\\nPlease note **this is not a criticism of (O'Connell, 2021)**; we acknowledge this method involves a fundamentally different paradigm than ours. We strongly appreciate the DAIML approach to obtain a disturbance-invariant representation in (O'Connell, 2021). Since (O'Connell, 2021) is the only existing work using a composite adaptive approach in the context of deep learning-based control, we believe it deserves to be cited and discussed in the related works section.\\n\\nFurthermore, note that our developed method can be easily extended for composited disturbances. Note that the developed method is agnostic of the DNN architecture used. Therefore, for a composited disturbance $f(x,\\\\dot{x},w)$, the representation $f(x,\\\\dot{x},w)=\\\\phi\\\\left(x,\\\\dot{x}\\\\right)a\\\\left(w\\\\right)$ from (O'Connell, 2022) can be used. The term $\\\\phi$ can be approximated as $\\\\phi\\\\left(x,\\\\dot{x}\\\\right)=\\\\Phi\\\\_\\\\{1}\\\\left(X,\\\\theta\\\\_{1}^*\\\\right)+\\\\varepsilon\\\\_\\\\{1}\\\\left(X\\\\right)$, where $\\\\Phi\\\\_\\\\{1}\\\\left(X,\\\\theta\\\\_{1}^*\\\\right)$ is a DNN with an appropriate ideal parameter $\\\\theta_1^*$, which yields $f(x,\\\\dot{x},w)=\\\\Phi\\\\_1\\\\left(X,\\\\theta\\\\_1^*\\\\right)a\\\\left(w\\\\right)+\\\\varepsilon\\\\_1\\\\left(X\\\\right)a\\\\left(w\\\\right)$. 
Then $\\\\theta^{\\\\*\\\\top}=[a(w)^\\\\top \\\\theta\\\\_1^{\\\\*\\\\top}]^{\\\\top}$ can be constructed as an augmented parameter to define the parameterization $\\\\Phi\\\\left(X,\\\\theta^*\\\\right)=\\\\Phi\\\\_{1}\\\\left(X,\\\\theta\\\\_{1}^*\\\\right)a\\\\left(w\\\\right)$ and $\\\\varepsilon\\\\left(X\\\\right)=\\\\varepsilon\\\\_1\\\\left(X\\\\right)a\\\\left(w\\\\right)$, which yields $f(x,\\\\dot{x},w)=\\\\Phi\\\\left(X,\\\\theta^*\\\\right)+\\\\varepsilon\\\\left(X\\\\right)$. In this case, $\\\\theta^*$ becomes a time-varying parameter. The analysis in (O'Connell, 2022) treated $a(w)$ as a bounded time-varying parameter with a bounded time-derivative. This argument can then be used to obtain that $\\\\theta^*$ and $\\\\dot{\\\\theta}^*$ are bounded. In the Lyapunov-based analysis the extra term $\\\\tilde{\\\\theta}^{\\\\top}\\\\dot{\\\\theta}^*$ would appear since $\\\\theta^*$ is time-varying. The bound on $\\\\dot{\\\\theta}^*$ can be used to conclude exponential convergence of all the error states to the neighborhood of the origin in a similar manner as in the proof of Theorem 1, except that the size of this neighborhood now also depends on the bound on $\\\\dot{\\\\theta}^*$.\"}", "{\"title\": \"Thanks for revision and response\", \"comment\": \"The reviewer agrees the response \\\"The reason we do not consider the underactuated case is because, even for underactuated systems with perfect model knowledge, there is no universal nonlinear controller that can be directly applied to every underactuated system. Instead, the the constructive Lyapunov-based control design procedure is unique for each underactuated system.\\\"\\n\\nThe reviewer correctly understands that the control input is not designed as $u=g^{+}( X -f(x,\\\\dot{x}))$ by using some X of generating arbitrary dynamics. He/she states that Assumption 1 severely limits the class of the plant system to completely eliminate both the challenge (and value) of the simultaneous problem. 
\\n\\nThe reviewer was satisfied with the revision regarding Assumption 2. Thank you for the update.\"}", "{\"comment\": \"References:\\n\\nF. L. Lewis, A. Yegildirek, and Kai Liu. Multilayer neural-net robot controller with guaranteed tracking performance. IEEE Trans. Neural Netw., 7(2):388\\u2013399, March 1996. doi: 10.1109/72.485674.\\n\\nR. Fierro and Frank L. Lewis. Control of a nonholonomic mobile robot using neural networks. IEEE Trans. Neural Netw., 9(4):589\\u2013600, July 1998.\\n\\nShi, G., Shi, X., O\\u2019Connell, M., Yu, R., Azizzadenesheli, K., Anandkumar, A., Yue, Y. and Chung, S.J., 2019, May. Neural lander: Stable drone landing control using learned dynamics. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 9784-9790). IEEE.\"}", "{\"title\": \"Update: Underactuated Nonholonomic Mobile Robot System Example Added in Appendix F\", \"comment\": \"Based on the reviewer's suggestion to provide the control development for under-actuated cases, we have revised the manuscript with a detailed illustration of extending the development to a nonholonomic mobile robot in Appendix F. This extension does not require taking the pseudo-inverse according to Assumption 1 but instead achieves tracking by backstepping the kinematics into the dynamics, based on the development in (Fierro & Lewis, 1998). We hope this addresses the reviewer's concerns about Assumption 1.\"}", "{\"comment\": \"We thank the reviewer for their appreciation of the motivation of our paper and the numerical experiments. However, we respectfully disagree with the reviewer on the severity of the assumptions, which we justify as follows.\\n\\n**Regarding fully-actuated dynamics**\\n\\nMany practical nonlinear systems, such as the robot manipulators and UUVs used in the simulation section, Stewart platforms, and hexapod robots, meet this assumption. 
This is a broad variety of applications where the developed method can be directly applied and where Assumption 1 holds mathematically and practically. The development is focused on fully-actuated systems to better highlight our unique specific contribution, i.e., the composite adaptive design for DNNs. The reason we do not consider the underactuated case is that, even for underactuated systems with perfect model knowledge, there is no universal nonlinear controller that can be directly applied to every underactuated system. Instead, the constructive Lyapunov-based control design procedure is unique for each underactuated system. For example, the nonlinear control design for a nonholonomic mobile robot is completely different from the design for quadrotors. However, the steps required to extend the DNN-based control development from a fully-actuated system to an underactuated system mostly involve standard tools (e.g., backstepping) without any challenges specific to composite adaptive DNN development. Tools such as backstepping have been well known since the 1980s, so including such an extension for specific cases would add limited theoretical value or novelty and would obscure our main contribution.\\n\\n**UPDATE:** Based on the reviewer's suggestion, we have revised the manuscript with a detailed illustration of extending the development to a nonholonomic mobile robot in Appendix F. This extension does not require taking the pseudo-inverse according to Assumption 1 but instead achieves tracking by backstepping the kinematics into the dynamics, based on the development in (Fierro & Lewis, 1998). We hope this addresses the reviewer's concerns about Assumption 1.\\n\\n**Regarding simultaneity of system identification and control**\\n\\nIt appears there is possibly a misunderstanding regarding what we mean by simultaneous system identification and control. The term $f(x,\\\\dot{x})$ is unknown and is not used anywhere in the controller. 
The control input is not designed as $u=g^{+}(r-f(x,\\\\dot{x}))$, but $u=g^{+}(x,\\\\dot{x})(\\\\\\\\ddot{x}\\\\_\\\\{d}(t)-(\\\\alpha_\\\\{1}+k_\\\\{r})r+(\\\\alpha_\\\\{1}^{2}-1)e-\\\\Phi(X,\\\\hat{\\\\theta}))$ according to Eq. (6). The term $\\\\Phi(X,\\\\hat{\\\\theta})$ is a DNN-based adaptive estimate of $f(x,\\\\dot{x})$, updated in real-time. Specifically, the DNN parameter estimate $\\\\hat{\\\\theta}$ updates in real-time based on the adaptive update law $\\\\dot{\\\\hat{\\\\theta}}=\\\\mathrm{proj}(-k_{\\\\hat{\\\\theta}}\\\\Gamma(t)\\\\hat{\\\\theta}+\\\\Gamma(t)\\\\Phi^{\\\\prime\\\\top}(X,\\\\hat{\\\\theta})(r+\\\\alpha_{3}E))$. Therefore, the control and adaptive update laws operate simultaneously, unlike offline training methods which identify the parameter estimates a priori using training datasets. Assumption 1 or fully-actuatedness does not violate this simultaneity.\\n\\n**Regarding Assumption 2**\\n\\nThe assumption is not about the existence of $\\\\bar{\\\\theta}$ but about its knowledge. Note that $\\\\bar{\\\\theta}$ is a bound on the unknown constant ideal parameter $\\\\theta^*$, so $\\\\bar{\\\\theta}$ exists because $\\\\theta^*$ is a constant. This is a standard mathematical assumption in NN-based adaptive control literature (e.g., see (Lewis, 1996)). As far as the approximation accuracy $\\\\bar{\\\\varepsilon}$ is concerned, its existence is not an assumption but a fact because due to the universal function approximation property of DNNs. In fact, even without the universal function approximation property, a $\\\\overline{\\\\varepsilon}>0$ satisfying $\\\\sup_{X\\\\in\\\\Omega}\\\\left\\\\Vert f(x,\\\\dot{x})-\\\\Phi(X,\\\\theta^{*})\\\\right\\\\Vert \\\\leq\\\\overline{\\\\varepsilon}$ would exist because the function $f$ and $\\\\Phi$ are continuous and therefore bounded over the compact set $\\\\Omega$. 
The universal function approximation property further allows $\\\\overline{\\\\varepsilon}$ to be prescribed as arbitrarily small. \\n\\nBased on the reviewer's comment, we revised Section 2.1 to state, \\u201cAssumption 2 is reasonable since in practice the user can select $\\\\overline{\\\\theta}$ a priori to restrict the parameter search space. If such a selection does not obey Assumption 2, the selection may no longer allow the user to make $\\\\bar{\\\\varepsilon}$ arbitrarily small as guaranteed by the universal function approximation property. However, a bound $\\\\bar{\\\\varepsilon}$ satisfying $\\\\sup_{X\\\\in\\\\Omega}\\\\left\\\\Vert \\\\varepsilon(X)\\\\right\\\\Vert \\\\leq\\\\overline{\\\\varepsilon}$ still exists due to the continuity of $f$ and $\\\\Phi$ over $\\\\Omega$. Using heuristic approaches, if such $\\\\bar{\\\\varepsilon}$ is found to be larger than the maximum allowable error, then $\\\\bar{\\\\theta}$ can be iteratively increased until it achieves the prescribed $\\\\bar{\\\\varepsilon}$.\\u201d\"}", "{\"comment\": \"We thank the reviewer for their appreciation of the strength, timeliness, and potential impact of our contribution. Based on the reviewer's suggestion to also include more standard controllers used in robotics as baselines, we have updated the simulation section to include nonlinear MPC and nonlinear PD controllers as baselines for both examples, the robot manipulator and the UUV. The developed method was able to achieve improved tracking performance with a similar control effort compared to both of these baselines. Furthermore, while we acknowledge that the comparison with nonlinear MPC provides valuable context, it is important to note that our approach operates under a fundamentally different paradigm. Specifically, using DNN-based online system identification, our method addresses scenarios where prior model knowledge is unavailable or unreliable, highlighting its practical applicability in uncertain environments. 
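For readers tracing the structure of the controller in Eq. (6) and the all-layer adaptation law discussed above, the following minimal sketch may help. It is an illustration only, not the paper's exact law: the toy dynamics `f_true`, the gains, and the network size are assumptions, and the projection operator, the composite prediction-error term E, and the least-squares gain $\Gamma(t)$ are omitted, leaving only the gradient-like tracking-error part of the update.

```python
import numpy as np

# Minimal sketch (not the paper's exact law) of all-layer adaptive
# control on a toy 1-DOF plant x_dd = f(x, x_d) + u with f unknown.
# f_true, the gains, and the network size are illustrative assumptions.

rng = np.random.default_rng(0)
n_h = 8                                   # hidden units
theta = np.concatenate([0.5 * rng.standard_normal(2 * n_h),  # W1 (n_h x 2)
                        np.zeros(n_h),                        # b1
                        np.zeros(n_h)])                       # W2

def phi_and_jac(theta, X):
    """DNN output Phi(X, theta_hat) and its Jacobian w.r.t. ALL weights."""
    W1 = theta[:2 * n_h].reshape(n_h, 2)
    b1 = theta[2 * n_h:3 * n_h]
    W2 = theta[3 * n_h:]
    h = np.tanh(W1 @ X + b1)
    dh = 1.0 - h ** 2                     # tanh derivative
    jac = np.concatenate([np.outer(W2 * dh, X).ravel(), W2 * dh, h])
    return W2 @ h, jac

f_true = lambda x, xd: -2.0 * np.sin(x) - 0.5 * xd * abs(xd)  # unknown to the controller
a1, kr, gamma, leak, dt = 2.0, 10.0, 1.0, 0.1, 1e-3
x, xd, errs = 1.0, 0.0, []
for k in range(20000):
    t = k * dt
    xdes, xdes_d, xdes_dd = np.sin(t), np.cos(t), -np.sin(t)
    e, ed = x - xdes, xd - xdes_d
    r = ed + a1 * e                                  # filtered tracking error
    phi, jac = phi_and_jac(theta, np.array([x, xd]))
    u = xdes_dd - a1 * ed - kr * r - phi             # feedback + DNN compensation
    theta = theta + dt * gamma * (jac * r - leak * theta)  # all-layer update with leakage
    x, xd = x + dt * xd, xd + dt * (f_true(x, xd) + u)     # Euler step
    errs.append(abs(e))
```

With these assumed gains the filtered-error feedback alone keeps the tracking error bounded, and the Jacobian-based update is intended to drive it down further; the paper's full law additionally uses the prediction error E and the time-varying gain $\Gamma(t)$.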
We hope this clarifies the distinction and demonstrates the merits of our approach. In future work, we also plan to extend our approach to develop a DNN-based adaptive MPC method, where the DNN-based adaptive system identifier can update the model in real-time for improving future state predictions over the horizon.\\n\\n**Regarding nonconvex dependence**\\n\\nThe convexity condition on an underlying loss did not appear in our analysis because we used a Lyapunov-based approach instead of traditional optimization-based approaches involving a loss function. Instead, we require the persistence of excitation (PE) condition and some gain conditions that may be considered analogous to a convexity condition. Specifically, instead of imposing convexity on the DNN, we use a first-order Taylor series approximation $\\\\Phi(X,\\\\theta^{*})-\\\\Phi(X,\\\\hat{\\\\theta})=\\\\Phi^{\\\\prime}(X,\\\\hat{\\\\theta})\\\\tilde{\\\\theta}+\\\\mathcal{O}(\\\\left\\\\Vert \\\\tilde{\\\\theta}\\\\right\\\\Vert ^{2})$ as in Eq. (14) of the paper, where the higher-order term $\\\\mathcal{O}(\\\\left\\\\Vert \\\\tilde{\\\\theta}\\\\right\\\\Vert ^{2})$ is shown to be bounded when $z\\\\in\\\\mathcal{D}$ and because $\\\\left\\\\Vert \\\\tilde{\\\\theta}\\\\right\\\\Vert \\\\leq2\\\\bar{\\\\theta}$ due to the projection operator. The bound on $\\\\mathcal{O}(\\\\left\\\\Vert \\\\tilde{\\\\theta}\\\\right\\\\Vert ^{2})$ carries forward to the gain conditions. 
Recall that $$\\\\lambda_3=\\\\min \\\\\\\\{ \\\\alpha_{1},k_{r}-\\\\frac{\\\\gamma_{1}}{2},\\\\alpha_{2},k_{f}-\\\\frac{\\\\gamma_{2}+\\\\alpha_{3}\\\\gamma_{3}}{2},\\\\frac{k_{\\\\hat{\\\\theta}}}{2}+\\\\frac{\\\\beta_{1}}{2\\\\varkappa_{0}}-\\\\alpha_{3}\\\\gamma_{3} \\\\\\\\}$$.\\nAs mentioned in Remark 3, the gains $\\\\alpha_{1}$,$\\\\alpha_{2}$,$\\\\alpha_{3}$,$k_{r}$, and $k_{f}$ can be selected to be sufficiently high such that $\\\\lambda_{3}=\\\\frac{k_{\\\\hat{\\\\theta}}}{2}+\\\\frac{\\\\beta_{1}}{2\\\\varkappa_{0}}-\\\\alpha_{3}\\\\gamma_{3}$; thus, $ \\\\frac{\\\\lambda_{2}c}{\\\\lambda_{1}\\\\lambda_{3}}=\\\\frac{\\\\lambda_{2}}{\\\\lambda_{1}}\\\\left(\\\\frac{\\\\gamma_{1}+\\\\gamma_{2}+k_{\\\\hat{\\\\theta}}\\\\bar{\\\\theta}^{2}+\\\\alpha_{3}\\\\gamma_{3}\\\\gamma_{1}^{2}}{k_{\\\\hat{\\\\theta}}+\\\\frac{\\\\beta_{1}}{\\\\varkappa_{0}}-2\\\\alpha_{3}\\\\gamma_{3}}\\\\right)$. Since the term $\\\\beta_{1}$ is quantitatively indicative of the PE condition with larger values under more excitation, the bound $\\\\sqrt{\\\\frac{\\\\lambda_{2}c}{\\\\lambda_{1}\\\\lambda_{3}}}$ becomes tighter under more excitation and can be arbitrarily decreased under higher gains and more excitation. Under the absence of PE, a bound is still obtained by selecting an appropriate high $k_{\\\\hat{\\\\theta}}>2\\\\alpha_{3}\\\\gamma_{3}$.\\n\\n**Regarding Input and State Constraints**\\n\\nThe input and state constraints can be incorporated using control barrier functions (CBFs) (Ames et. al., 2016). Specifically, the control input from the developed method can be passed as a nominal control input through a CBF-based safety filter involving a quadratic program with CBF-based constraints. However, exploring input and state constraints is beyond the scope of this work as the CBF-based safety constraints are either difficult to obtain or too conservative under the lack of prior model knowledge. 
We plan to investigate DNN-based adaptive CBFs in future work, where safety constraints can be gradually made less conservative as the DNN learns the system dynamics online. Based on the reviewer's comment, we have added the following sentence in Section 6 to discuss future work:\\n\\n\\\"Furthermore, extensions of the developed online system identification approach in optimization-based control paradigms such as MPC and reinforcement learning can be explored. Moreover, future research efforts can also investigate how to combine the developed method with control barrier functions to satisfy state and input constraints.\\\"\\n\\n**References:**\\n\\nAmes, A.D., Xu, X., Grizzle, J.W. and Tabuada, P., 2016. Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8), pp.3861-3876.\"}", "{\"summary\": \"This paper addresses a methodology for simultaneously performing online system identification of the plant system and adaptation in the feedback control logic. Under some technical assumptions, stability conditions are presented; more specifically, the asymptotic stability of the equilibrium point of the entire feedback control system is ensured.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The simultaneous approach to online system identification and adaptation in the control logic, addressed in this paper, is well-motivated and justified with attractive numerical experiments. In addition, as claimed by the authors, the convergence analysis for the identification error and control error is novel.\", \"weaknesses\": \"Assumptions 1 and 2 are mathematically severe. While this paper claims its contribution lies in simultaneous system identification and control, Assumption 1 implies that performing system identification can achieve arbitrary control performance (arbitrary system dynamics is realized by $u = g^{+}(r - f(x,\\dot{x}))$). 
In this sense, the problem addressed in this paper is not essentially simultaneous.\\n\\nFurthermore, there are too many technical assumptions on the modeling accuracy, i.e., the existence of $\\bar{\\varepsilon}$, $\\bar{\\theta}$, etc.\", \"questions\": \"Could you relax Assumptions 1 and 2? In particular, as commented in Weaknesses, Assumption 1 is mathematically (and practically) severe. The authors comment that the developed methods can be extended to underactuated systems. However, the details of the extension are scarcely explained, and no theoretical analysis is provided. The reviewer believes that the extension to underactuated cases and its convergence analysis should be the main contribution of this paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for the helpful review which has significantly improved the quality of our paper! Indeed, MPC with mismatch learning can be adapted to such a situation. We will keep this in mind while investigating future extensions of our work to the MPC paradigm. Thanks for this information!\"}", "{\"metareview\": \"This paper demonstrates a result for simultaneous identification and stabilization of a control system with a deep neural network. The reviewers were generally positive, but with one reviewer leaning towards rejection. The positive reviewers were rather lacking in detail, whereas the negative reviewer demonstrated greater expertise. In addition, reviewers did not actively participate in discussion.\\n\\nTherefore, I decided to intervene as meta-reviewer to assess the paper on its own merits. \\n(a) This paper establishes simultaneous tracking and stabilization for nonlinear systems. They claim this is novel, because classical results only apply to linear parameter uncertainty. 
\\n(b) This paper mostly makes good on its claim.\\n(c) I believe the presentation of the paper is below the standards of ICLR, and will be somewhat impenetrable for the community. For one, results rely on a number of terms which are defined haphazardly throughout. Second, certain constants (e.g. the Taylor expansion should depend on the smoothness of G) are omitted. Moreover, NNs are typically highly non-smooth, so framing arguments as Taylor expansions seems misleading. Third, the authors' claimed limitation of Slotine and Li '89 is that they require a low pass filter, which necessitates LPV systems. But I do not seem to see any form of filtering in the paper, and indeed the authors seem to act as if they have access to time derivatives. Moving beyond this point is quite poorly explained. \\n(d) Ultimately, it seems that the theoretical content of this paper has little to do with neural networks at all, and should be about tracking/estimation with non-LPV uncertainty. The presentation has great room for improvement, and the reviewer who appeared most knowledgeable also seems to side with rejection.\", \"constructive_suggestions_to_the_authors\": \"(a) rewrite proofs, exposition, clean up theorem statements and remove excess and unnecessary terms. (b) I would suggest submitting a revised manuscript to a controls venue, where the work can be appropriately evaluated by a community with a more extensive knowledge of nonlinear adaptive control.\", \"additional_comments_on_reviewer_discussion\": \"The negative reviewer's concern still remained.\"}", "{\"title\": \"Summary of Changes\", \"comment\": \"We are grateful to all the reviewers for their positive and constructive feedback. 
We appreciate that all reviewers acknowledge the contribution of our work: **\\\"the paper is on a very timely topic\\\", \\\"a principled treatment of a learning-based controller is an impactful contribution\\\", \\\"introducing a controller which allows for updates in all layers, in contrast to the state of the art where only the last layers can be updated, and where system dynamics is not considered explicitly, is a big contribution\\\"** (reviewer vqH4), **\\\"... is well-motivated and justified with attractive numerical experiments\\\"** (reviewer XnMo), **\\\"This is first application of the Jacobian of the DNN to develop simultaneous online system identification and control. The theoretical content of the paper is good. The literature research is sufficient.\\\"** (reviewer bqXC) , **\\\" some novel theoretical results are developed in this paper with rigid proofs. The presentation is also clear\\\"** (reviewer N4X4).\\n\\nAll changes in the paper have been marked in blue. For the convenience of the reviewers, we are providing a summary of revisions in the manuscript below.\\n\\n**1. More baselines:** The robot manipulator and UUV simulations (Sections 4.1 and 4.2, respectively) are now expanded with nonlinear MPC, nonlinear PD, and observer-based disturbance rejection controllers as additional baselines for simulations.\\n\\n**2. Measurement Noise:** All of the simulations were performed again incorporating state measurement noise to make the simulations more realistic. Sections 4.1 and 4.2 have been updated accordingly.\\n\\n**3. More Details on Simulations:** Based on the reviewers' requests, additional simulation details on control inputs, weights update, and the selected control parameters along with their selection strategy are provided in Appendix D.\\n\\n**4. Expanded Justification of Assumptions:** More discussion is added in Section 2 to justify Assumption 1 and 2. \\n\\n**5. 
Further Discussion on Related Works:** A more detailed discussion regarding the related work (O'Connell, 2022) is added in Appendix B.2. \\n\\n**6. Modified Abstract:** The abstract is made more concise and the factors that determine the upper bounds of convergence sets are provided.\\n\\n**7. Future Work:** More discussion is added in Section 6 on possible future extensions of this work to incorporate state and input constraints.\\n\\n**8. Extension to underactuated case of nonholonomic mobile robot:** An extension of the development to nonholonomic mobile robots is added in Appendix F.\\n\\n---\\n**Summary of Discussion Period**\\n\\n1. **Reviewer vqH4** suggested incorporating comparisons with more standard baselines, such as Model Predictive Control (MPC), and asked questions about convexity. In response, we **added comparisons with nonlinear MPC and nonlinear PD as baselines** and provided detailed explanations of how conditions analogous to convexity emerge in the form of Persistent Excitation (PE) and gain conditions in the Lyapunov analysis. The reviewer was satisfied with these additions and **raised the score to 8**.\\n\\n2. **Reviewer XnMo** critiqued Assumptions 1 and 2, requesting more justification and **details on extending the method to underactuated systems** along with a theoretical analysis. We addressed these concerns by providing stronger justifications for the assumptions and clarifying that **theoretical extensions of any controller to underactuated systems are not universal but inherently case-specific**. To illustrate the process of such extension, we **included an extension to a nonholonomic mobile robot** in Appendix F. While the reviewer was satisfied with our justification for Assumption 2, they have not yet commented on the extension for mobile robots. Furthermore, the reviewer stated Assumption 1 eliminates the challenges, which we disagree with. 
**The challenges we addressed are in the DNN all-layer update part, our key contribution**, which the reviewer did not comment on yet.\\n\\n3. **Reviewer bqXC** recommended incorporating measurement noise in the simulations and adding more experimental details. **We revised the manuscript accordingly.** Additionally, the reviewer inquired whether the method could handle strongly nonlinear systems like quadcopters. We affirmed that it could, noting that the Unmanned Underwater Vehicle example in our work includes nonlinear dynamics comparable to those of a quadcopter. The reviewer acknowledged our rebuttal, remarked that 6 is an appropriate rating for our current manuscript, and gave constructive feedback on improving the quality of our work in future.\\n\\n4. **Reviewer N4X4** suggested improvements to the related work section, simulations, and abstract. We implemented most of their recommendations in our revisions. The reviewer raised follow-up questions about the **connections between deep learning and adaptive control** in the context of our contribution. Our detailed responses convinced the reviewer, leading to an **increased score of 8**.\"}", "{\"title\": \"Thanks for the reply\", \"comment\": \"Thank you for the authors' responses\\uff01 Most of my comments have been addressed. I would like to discuss this 'control or learning' point further with the authors. The authors claim that the presented scheme represents an intersection of adaptive control and deep learning. However, one key advantage of deep learning is its ability to extract valuable insights from lots of historical data. In contrast, the proposed adaptive strategy relies solely on the current feedback state to update all layers. This suggests that the learned neural network is only applicable to the present moment, which raises questions about its alignment with deep learning principles, aside from the network structure as its adaptive regressor/basis. 
I am curious what the authors would think about this.\\n\\nI acknowledge the valuable theoretical contribution, offering significant value from an adaptation perspective. I believe that conducting future real-world experiments will prove its validity and greatly enhance its contributions. I keep the score.\"}", "{\"comment\": \"Thanks again for the thorough review, constructive feedback, and insightful discussion, which have helped us improve the quality of the paper and deepen our understanding of this field. We completely agree that offline learning is not always a drawback.\"}", "{\"title\": \"Thanks for this response\", \"comment\": \"Thanks for the careful reply! I am now aware of the relevance of the proposed approach to deep learning. I have increased the score to 8.\\n\\nIn addition, offline learning is not always a drawback, as seen in the continual learning of animals. I am looking forward to physical verification!\"}", "{\"summary\": \"This paper introduces an adaptive DNN-based controller. Standard adaptive DNN-based controllers rely on Lyapunov-based analysis and allow updates on the last layers of the NNs only. They only use the tracking error as a metric to indicate when adaptation is needed, and only provide guarantees on the tracking error convergence. This paper introduces a dual (composite) method for continuous system identification combined with trajectory tracking, and guarantees that the tracking error, state-derivative estimation error, and DNN weight estimation errors are uniformly ultimately bounded. The last two reflect identifying the dynamics of the system.\\nThe system identification is performed via a dynamic state-derivative estimator and under the assumption of persistence of excitation. 
The controller is evaluated in simulation on a two-link manipulator system and an unmanned underwater vehicle system with intermittent loss of state feedback, and shows improvement compared to baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is on a very timely topic, DNN-based control. With the emergence of AI-based approaches, a principled treatment of a learning-based controller is an impactful contribution. Introducing a controller which allows for updates in all layers, in contrast to the state of the art where only the last layers can be updated and where system dynamics is not considered explicitly, is a big contribution.\\nGiven the complexity of the paper, it is very clearly presented. The simulation examples are relevant and not just datasets, but dynamical systems.\", \"weaknesses\": \"1. In the simulations, there is only a comparison to DNN-based controllers.\\nIt would be highly interesting to see how the performance compares to more standard controllers used in robotics (for example, MPC-based or RL-based controllers). I am willing to raise my rating if this is added.\", \"questions\": \"1. Shouldn\\u2019t there be an assumption for the activation functions to be convex? How do you deal with the nonconvex dependence of the underlying loss function on the weights of hidden layers? Is it playing a role?\\n\\n2. How can input and state constraints be integrated in the proposed approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"On the intersection of deep learning and adaptive control (Part 1)\", \"comment\": \"We thank the reviewer for the constructive feedback and appreciation of our work. 
We wholeheartedly agree that conducting future real-world experiments will prove its validity and greatly enhance its contributions, and we plan on doing so.\\n\\nThe reviewer has raised an intriguing question about the alignment of deep learning principles with our adaptive design. We would like to take this opportunity to explain the existing connections between the core principles of deep learning and adaptive control. The core ideas in deep learning and adaptive control are both rooted in the model-based regression problem. Traditional deep learning typically involves optimization methods such as stochastic gradient descent or ADAM to minimize the loss function offline over prior datasets to perform the regression. Least squares adaptive control essentially uses online gradient descent to minimize the least squares loss over the data encountered online so far by the system. For more details, the reviewer is referred to the recent work (Gaudio et al., 2019), which demonstrated the equivalence between the machine learning and adaptive control paradigms. \\n\\nIn a sense, the least squares adaptive approach is a recursive/continuous-time implementation of the batch least squares update. Although it might appear from the adaptive update law that it relies solely on the current state information, this appearance is misleading, especially in the case of the least squares update. The historical data encountered by the system shows up implicitly in the weight estimates due to the previous state values, and the previously encountered data is not forgotten. Please see Part 2 of our response for an explicit mathematical illustration. \\n\\nThe key takeaway is that adaptive control methods address the same underlying model-based regression/parameter estimation problem. Due to guarantees on accurate parameter estimation under the PE condition, the obtained model can generalize well beyond the current trajectory, even to off-trajectory datapoints. 
This is why in our simulations on the two-link, our developed method was able to achieve a good model generalization on the test dataset involving off-trajectory datapoints. Similarly, in the case of the UUV, we showed that the identified DNN model can be used open-loop to make predictions when the state feedback is lost. The open-loop controller using the DNN identified with our method was effective despite the loss of feedback due to its ability to generalize on newer data.\\n\\nNote that, in a sense, the PE condition is indicative of exploration or the richness of data encountered by the system online. Similar to how traditional deep learning methods would extract valuable information from the historical data, our method leverages PE condition to achieve accurate parameter estimation, thus enabling system identification and model's ability to generalize beyond the encountered trajectory. Furthermore, as stated before, the existing historical data can still be used to initialize the DNN by offline training, and the discussed meta learning based approaches are valuable in this regard. We believe such an approach might reduce the burden of online exploration and would be an interesting research direction for future work. The void that this work fills is the continued learning after task execution. Over the lifetime of the task execution, all the data is continuously being embedded within the DNN from the actual operating environment and operating conditions.\", \"references\": \"Gaudio, J.E., Gibson, T.E., Annaswamy, A.M., Bolender, M.A. and Lavretsky, E., 2019, December. Connections between adaptive control and optimization in machine learning. In 2019 IEEE 58th Conference on Decision and Control (CDC) (pp. 4563-4568). 
IEEE.\"}", "{\"comment\": \"**Revisions in Simulations**\\n\\nBased on the reviewer's suggestion of adding measurement noise to make the simulations more realistic, we performed all of the simulations again with measurement noise, and the results in the simulation section are now updated accordingly. Additionally, although the reviewer is correct that high gains can amplify measurement noise, this is true for all feedback controllers in general; the tradeoff between tracking performance, control effort, and noise sensitivity is well-known. Notably, instead of using numerical derivatives in our update laws, we use a dynamic state-derivative estimator which is shown to be robust to noise in Appendix A.2. As shown in the proof, the ultimate bound on state-derivative estimation error grows linearly with the bound on the noise $\\\\bar{\\\\delta}$.\\n\\nFurthermore, based on the reviewer's suggestion, we have also included the state-derivative observer-based disturbance rejection controller as a baseline for the two link manipulator. The gain selection strategy for ensuring fairness is described in Appendix D. For a fair comparison, the set of gains common to the developed and baseline methods were selected to be exactly the same. We did not perform a comparison with the composite adaptive method in (Slotine and Li, 1989) because the comparison would not be fair in our opinion. The method in (Slotine and Li, 1989) has an unfair advantage because it uses regressor information based on the structural knowledge of the manipulator dynamics. The DNN being a black box model does not have access to this information, so Slotine and Li's method is expected to perform better. 
This is for similar reasons as why a feedback linearization controller using the exact knowledge of $f(x,\\\\dot{x})$ would be expected to perform better than Slotine and Li's method or our method.\\n\\n**Response to other queries**\\n\\nThe proposed framework can be easily combined with offline learning by initializing the DNN weight estimates with pre-trained weights. In fact, initializing with pre-trained weights (e.g., especially using meta-learning approaches such as (O'Connell, 2021)) would likely further improve the controller performance. However, to better showcase the online learning performance with the composite adaptation law, we initialized the DNN with completely random weights in our simulations.\\n\\nAlthough it might appear the update strategy involves only the current state measurements, the least-squares update law does actually involve a historical data-mining procedure. Specifically, notice the least squares adaptation gain matrix $\\\\Gamma(t)$ evolves according to Eq. (12) in the manuscript, which reads $\\\\frac{d}{dt}\\\\Gamma^{-1}(t)=-\\\\beta(t)\\\\Gamma^{-1}(t)+\\\\Phi^{\\\\prime\\\\top}\\\\left(X,\\\\hat{\\\\theta}\\\\right)\\\\Phi^{\\\\prime}\\\\left(X,\\\\hat{\\\\theta}\\\\right)$. This equation reveals that the adaptation gain matrix $\\\\Gamma(t)$ implicitly incorporates information from all past data points through the dynamics of $\\\\Gamma^{-1}(t)$, which accumulate contributions from the regressor matrix $\\\\Phi^{\\\\prime\\\\top}\\\\left(X,\\\\hat{\\\\theta}\\\\right)\\\\Phi^{\\\\prime}\\\\left(X,\\\\hat{\\\\theta}\\\\right)$ over time. As a result, the update law inherently processes historical data by integrating its influence into the adaptation dynamics, thereby enabling the least-squares framework to adapt based on both current and prior system behaviors.\\n\\nRegarding the comment \\u201cIt is a kind of traditional adaptive control, instead of modern data-based learning. 
Maybe a control journal is more applicable to this paper\\u201d, we would like to point out that our work lies at the intersection of adaptive control and deep learning. Specifically, the use of deep neural networks makes our work different from traditional adaptive control. Furthermore, ICLR explicitly invites contributions involving applications to robotics, autonomy, and planning (which is the primary area of this submission).\", \"references\": \"O\\u2019Connell, M., Shi, G., Shi, X., Azizzadenesheli, K., Anandkumar, A., Yue, Y. and Chung, S.J., 2022. Neural-fly enables rapid learning for agile flight in strong winds. Science Robotics, 7(66), p.eabm6597.\\n\\nSlotine, J.J.E. and Li, W., 1989. Composite adaptive control of robot manipulators. Automatica, 25(4), pp.509-519.\"}", "{\"summary\": \"This paper provides the first result on simultaneous online system identification and trajectory tracking control of nonlinear systems using adaptive updates for all layers of the DNN. The Lyapunov-based stability analysis is provided, which guarantees that the tracking error, state-derivative estimation error, and DNN weight estimation errors are uniformly ultimately bounded.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is the first application of the Jacobian of the DNN to develop simultaneous online system identification and control. The theoretical content of the paper is good. The literature research is sufficient.\", \"weaknesses\": \"The practical application of this method requires high computing resources and is not suitable for personal computers. The presence of measurement noise does not seem to be considered in the two simulation tests, which is unreasonable. The control inputs of the two simulations should also be presented. 
Moreover, the nonlinear dynamics of the selected simulation system are weak.\", \"questions\": \"1)\\tThe existence of measurement noise should be considered, which is common in practical engineering; Please provide the control inputs of the two simulation tests;\\n2)\\tI wonder if this method is effective for highly dynamic systems like quadcopters?\\n3)\\tProvide more experimental details, such as control inputs, weight updates, and the selected control parameters.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a kind of Lyapunov-based adaptive framework that can update all layers of a DNN. The proposed method can handle nonlinear-in-parameters uncertainties. Moreover, a dynamic state-derivative estimator is utilized to obtain the state-derivative information. Overall, some novel theoretical results are developed in this paper with rigorous proofs. The presentation is also clear. However, some drawbacks exist and many improvements can be further considered. There are some inappropriate statements and comparisons. The simulation tests are not enough to show its efficiency. Please refer to below for more details.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The main contribution is that a Lyapunov-based adaptive framework is proposed to update **all layers** of the DNN.\\n2. Rigorous convergence analysis.\\n3. Two applied examples, despite only numerical simulations.\", \"weaknesses\": \"1. The Abstract is too long, preventing the reader from capturing key points quickly. It is recommended to only highlight the main contributions in the Abstract; technical details can be removed.\\n\\n2. It is claimed that the tracking, state-derivative, and weight estimation errors can be guaranteed to converge to bounded sets. 
**The factors that determine the upper bounds** of the convergence sets should be provided in the Abstract.\\n\\n3. The previous work (O'Connell, 2022) is compared in the Intro. It is claimed that the limitation of the composite adaptive approach used by O'Connell, 2022, is that the inner-layer weights cannot be updated online. However, the case considered by O'Connell, 2022 is different from the one in this paper. O'Connell mainly focuses on a composite disturbance, which comes from external disturbances and internal state-related uncertainties. The last layer of the DNN is updated online to handle external disturbances, while the inner layers correspond to internal state-related uncertainties, which would not change in application. However, the internal state-related uncertainty is mainly considered in this paper, i.e., $f({x, \\\\dot{x}})$. **The direct comparison with (O'Connell, 2022) is inappropriate**.\\n\\n4. One important problem is that only simulation examples are demonstrated in this paper, and no noise exists in the measured states, despite the theorems that seem to be relatively complete. A small upper bound of the convergence set depends on a large gain. However, the gain may amplify the noise in a real system. Thus, **the effectiveness in real applications is questionable**.\\n\\n5. In the simulation of the two-link manipulator, it is recommended to cover the ESO comparison and the composite adaptive method developed in (Slotine and Li, 1989). The gain selection strategy for all comparison methods should be provided to ensure fairness.\", \"questions\": \"1. Could the proposed framework be combined with offline learning? It seems the proposed update strategy only relies on current measurements and has no historical data-mining procedure. It is a kind of traditional adaptive control, instead of modern data-based learning. Maybe a control journal is more applicable to this paper.\\n\\n2. 
Can the considered uncertainty $f({x, \\\\dot{x}})$ be extended to a composite disturbance, like $f({x, \\\\dot{x}, d})$, where $d$ denotes an external disturbance? This would be valuable in real applications.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their appreciation of the novelty, theoretical contents, and literature research. While we acknowledge that updating all layers of the DNN in real-time may require high computational resources, many real-world applications, especially those in industrial and robotics settings, have access to more powerful computational platforms, where such methods can be feasibly deployed.\\n\\nBased on the reviewer's suggestion, we performed all of the simulations again with measurement noise, and the results in the simulation section are now updated accordingly. We have also included the control input plots in both simulation examples according to the reviewer's request. \\n\\nRegarding the comment about the nonlinear dynamics in simulations being weak or the question whether this method is effective for highly dynamic systems like quadcopters, please note that the dynamics of the UUV in Section 4.2 (described in further detail in Appendix D.2) are similar to the quadcopter dynamics (see Eq. (1) in (O'Connell et al., 2022) for example). This similarity arises because both systems have Euler-Lagrange dynamics resulting from the dynamics of rigid-body translation and rotation. The UUV dynamics contain strong nonlinearities resulting from centripetal-Coriolis effects and hydrodynamic damping effects. 
So the developed method is indeed effective for highly dynamic systems.\\n\\nThe additional simulation details on control inputs, weights update, and the selected control parameters are provided in Appendix D.\", \"references\": \"O\\u2019Connell, M., Shi, G., Shi, X., Azizzadenesheli, K., Anandkumar, A., Yue, Y. and Chung, S.J., 2022. Neural-fly enables rapid learning for agile flight in strong winds. Science Robotics, 7(66), p.eabm6597.\"}", "{\"comment\": \"Thank you for providing the simulations comparing to nonlinear MPC and nonlinear PD. Indeed, DNN-based methods would address the situation with limited model knowledge; however, standard control methods could also be adapted to such a situation (MPC+mismatch learning, etc.).\\nThe clarification about the nonconvex dependence is useful.\\nI am increasing my rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
59r0ntInvF
Haste Makes Waste: Teaching Image Restoration to Learn Distributions from Pixels to Patterns
[ "Enxuan Gu", "Hongwei Ge", "Chen Zhao", "Yong Guo" ]
In this paper, we revisit the image restoration (IR) task and propose a new training strategy that models the IR problem as a distribution mapping challenge from two perspectives, i.e., (1) the intra-pixel regression and (2) the inter-pixel interaction. At the beginning of optimization, because the pattern distribution involves a group of pixels within a neighborhood, it is difficult for the model to capture such a multi-pixel distribution mapping. A better solution is to first teach the model, as a warm-up, a relatively simple yet important distribution, i.e., the pixel-by-pixel mapping between the degraded/clean pixels. By doing so, the learned distribution serves as a prior, regarded as an injection of a kind of inductive bias into the model's whole optimization procedure. Subsequently, as is conventional, the model shifts to focus on the mapping distribution of the cross-pixel patterns, which ensures the consistency and fidelity of the image patterns. The final learned mapping is a joint distribution, which transfers the knowledge from the pixel distributions to the pattern ones. Experimental results indicate that under this compact and elegant training paradigm, the newly learned joint distribution is closer to the ideal one and yields a stronger representation ability, circumventing the difficulty existing methods face in learning the pattern mapping distribution between degraded/clean images right off the bat.
[ "Image Restoration", "Low-level Vision", "Training Strategy" ]
https://openreview.net/pdf?id=59r0ntInvF
https://openreview.net/forum?id=59r0ntInvF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vESQq11h0y", "amktnFCLXc", "H929pI4t5K", "5hxr1fMNH5" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731578820656, 1730112817111, 1730642096645, 1730298138608 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4346/Authors" ], [ "ICLR.cc/2025/Conference/Submission4346/Reviewer_NRqa" ], [ "ICLR.cc/2025/Conference/Submission4346/Reviewer_EVEX" ], [ "ICLR.cc/2025/Conference/Submission4346/Reviewer_CGJN" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely appreciate the detailed and constructive feedback from all three reviewers. However, certain technical details of our work may not have been fully appreciated. After careful consideration, we have decided to withdraw our paper.\"}", "{\"summary\": \"This paper proposes a new training strategy for image restoration tasks by modeling from both intra-pixel and inter-pixel perspectives. This approach enhances the network's performance without requiring additional training data or time.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The proposed approach enhances the network's performance without requiring additional training data or time.\", \"weaknesses\": \"1. The method lacks sufficient innovation and resembles more of a trick, which is not enough to support a paper at ICLR.\\n2. The authors retrained all comparison methods during their experiments, yet the reported results for these methods are significantly lower than those in the original papers. This raises uncertainty as to whether the proposed method is only effective under the specific training conditions used by the authors. Even under these conditions, the performance improvement is minimal.\\n3. 
The authors claim their method is designed for image restoration tasks, but the experiments only include image denoising and deblurring. There is a lack of experiments on other image restoration tasks, such as image super-resolution and deraining.\", \"questions\": \"Train the network using the same settings as the comparison method to see if the performance can surpass the comparison method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new training strategy for image restoration (IR) tasks. The strategy, named TRAPS (InTRA-patch Pixel-Shuffle), addresses the IR problem by modeling it as a distribution mapping challenge from two perspectives: intra-pixel regression and inter-pixel interaction. The method starts by teaching the model to learn a simpler pixel-by-pixel distribution, which serves as a prior and inductive bias, and then transitions to learning cross-pixel pattern distributions. The proposed approach aims to improve the model's ability to learn complex pattern mappings between degraded and clean images by breaking down the learning process into more manageable stages.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a novel training approach that addresses the complexity of learning pattern distributions in IR by breaking it down into simpler stages, which is a creative solution to a known challenge in the field.\\n\\n2. The method is evaluated extensively on benchmark datasets, demonstrating consistent improvements across different models and tasks, which speaks to the robustness of the approach.\\n\\n3. TRAPS can be integrated into existing supervised IR methods without additional burden, making it a versatile tool that can potentially benefit a wide range of IR models.\\n\\n4. 
The paper provides a theoretical justification for the training strategy by modeling the IR task as an optimization problem involving distribution mapping, which adds depth to the understanding of IR processes.\", \"weaknesses\": \"1. The paper does not discuss the potential for overfitting, especially since the model is learning from a shuffled pixel distribution, which could lead to different characteristics compared to natural image statistics.\\n\\n2. Although the method shows good prospects in IR tasks, it is not clear how well it can be generalized to other low-level vision tasks. Because the proposed method is very simple to implement and its theory is straightforward, sufficient experiments are essential to prove its effectiveness. At present, only two tasks do not seem to be enough to prove its effectiveness and scalability. Other common restoration tasks are also necessary, including: image super-resolution, image dehazing, image deraining, low-light enhancement, etc.\\n\\n3. The method proposed in the article is simple and effective. But what is its computational cost for the network? If the restoration network is larger, will the computational cost of this method also increase? This part should be further analyzed.\\n\\n4. Are there other visualizations and more detailed theoretical justifications that could further support the proposed optimization directions?\\n\\nSince the article is quite interesting and has a new perspective, I will make corresponding changes based on the author's rebuttal.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper revisits the image restoration (IR) task by identifying its fundamental pixel-to-pixel regression nature and modeling it as an optimization problem from both intra-pixel and inter-pixel perspectives. 
It proposes a novel training strategy tailored to these observations, serving as a free data augmentation method or a warm-up approach for training. This paper\\u2019s strategy can be seamlessly integrated into existing supervised IR methods without additional burden, effectively introducing an inductive bias that enhances model performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**Enhanced Performance**: The proposed training strategy introduces an inductive bias that can significantly boost the performance of existing supervised image restoration (IR) methods, leading to improved accuracy in pixel-to-pixel regression tasks.\\n\\n**Seamless Integration**: This paper's strategy can be easily incorporated into current IR frameworks without requiring major modifications, allowing researchers to adopt the method with minimal effort and disruption.\\n\\n**Versatile Application**: By functioning as both a free data augmentation technique and a training warm-up approach, the proposed strategy provides flexible options for enhancing model training, making it applicable in various IR contexts.\", \"weaknesses\": \"**Limited Early-Stage Effectiveness**: The TRAPS strategy relies on a gradual transition from intra-pixel to inter-pixel optimization, which may delay capturing complex content distributions early in training, potentially leading to slower initial convergence.\\n\\n**Dependency on Pre-Generated Indices**: The need to shuffle pixels according to pre-defined indices might introduce constraints, as it requires careful setup and may limit flexibility in adapting to varying IR tasks or datasets.\\n\\n**Potential for Overhead in Warm-Up Phase**: Although designed to streamline optimization, the warm-up phase might add computational overhead, as the network initially focuses on simplified pixel mappings, which could lengthen the overall training duration in some cases.\", \"questions\": \"See Weaknesses\", 
\"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
599F4CZ0HB
Bench-O-Matic: Automating Benchmark Curation from Crowdsourced Data
[ "Tianle Li", "Wei-Lin Chiang", "Evan Frick", "Lisa Dunlap", "Tianhao Wu", "Banghua Zhu", "Joseph E. Gonzalez", "Ion Stoica" ]
The rapid evolution of Large Language Models (LLMs) has outpaced the development of model evaluation, highlighting the need for continuous curation of new, challenging benchmarks. However, manual curation of high-quality, human-aligned benchmarks is expensive and time-consuming. To address this, we introduce Bench-O-Matic, an automated pipeline that leverages LLMs to curate high-quality, open-ended prompts from large, crowd-sourced datasets, enabling continuous benchmark updates without a human in the loop. We apply Bench-O-Matic to datasets such as Chatbot Arena and WildChat-1M, extracting challenging prompts and utilizing LLM-as-a-Judge for automatic model evaluation. To validate benchmark quality, we propose new metrics to measure a benchmark’s alignment with human preferences and ability to separate models. We release Eval-O-Matic, a benchmark consisting of 500 challenging prompts curated by Bench-O-Matic. Eval-O-Matic provides 3x higher separation of model performances compared to MT-Bench and achieves 98.6% correlation with human preference rankings, all at a cost of $20. Our work sets a new framework for the scalable curation of automated benchmarks from extensive data.
[ "LLM", "Evaluation" ]
Reject
https://openreview.net/pdf?id=599F4CZ0HB
https://openreview.net/forum?id=599F4CZ0HB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zx02MwSxQD", "mgVxfcKsL0", "mRLsrJMUBX", "jXfPGAnOcZ", "jR68jMQEPQ", "iQtKQekOUH", "g9BM11mVhu", "fzoJRq0uI0", "fnoK0lfDk9", "WMAEG9XbEZ", "So1gSG5S1p", "RoLZjYmUwW", "Lgg0hY9qN3", "LK0sNkgQ6m", "KdGonTm6Hw", "IDxHwieXzT", "GO0XrLx81f", "G7qxo04cMW", "2umWA4RiFK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732207412827, 1731926122559, 1731940093661, 1731928284501, 1733167723476, 1732405044803, 1731928306893, 1732111819218, 1732262550805, 1729530085496, 1731920181168, 1732630417573, 1737524269819, 1731925326515, 1730785996997, 1730698041661, 1731973240159, 1734787667963, 1732262525096 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_3LVj" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_5rNy" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_5rNy" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_5rNy" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_5rNy" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_EWC5" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_3LVj" ], [ "ICLR.cc/2025/Conference/Submission13585/Reviewer_3LVj" ], [ 
"ICLR.cc/2025/Conference/Submission13585/Area_Chair_VwvK" ], [ "ICLR.cc/2025/Conference/Submission13585/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I do agree that the impact is limited in terms of technical research contributions. However, in my opinion, a paper in the 'Datasets and Benchmarks' category should not be expected to make a large technical contribution, but it should provide a tool or methodology that has the potential to support and advance future research. The strength of this work lies in its solid execution, relevance to current research needs, and potential to become a widely used tool.\\n\\nFrom my understanding, the generalizability of the pipeline depends on the diversity of the topics in the underlying dataset. If the curated dataset includes questions related to other domains like scientific agents, molecules/proteins, or code generation, the pipeline should adapt and create the appropriate topic clusters, provided the necessary prompts exist. Maybe the authors could expand on this?\"}", "{\"title\": \"Official Response by Authors\", \"comment\": \"### Reviewer's Question\\n\\n> Is there any estimation on the error/quality of the data generated? Or using some metrics to evaluate the similarity of the generated data with the real-world data?\\n\\nWe are concerned there may be a potential misunderstanding. Our Bench-O-Matic pipeline does not generate data, but serves as a curation pipeline. It automatically identifies high-quality prompts from real-world data without modifying the prompts. Since we are applying the pipeline on real-world user prompts, the resulting datasets are still real-world user prompts. To evaluate the quality of the resulting dataset, we focus on how well it serves as a benchmark for predicting model performance with fidelity to human preferences. 
In our ablation, we found that unbiased random samples of real-world data produce low quality benchmarks while our targeted sample of the same real-world data is much higher in quality (Line 452).\\n\\nIf the reviewer is referring to the preference labels generated by the LLM judge during model evaluation, we proposed metrics to measure how well a benchmark aligns with human preferences in Section 3. Our metrics demonstrate that the curated benchmarks strongly agree and correlate with human preference leaderboards (Table 1; Table 2; Table 3). These results imply that when evaluating a model, our benchmark can accurately predict its performance on real-world preference tasks, which is the primary goal of an effective benchmark.\\n\\n---\\n\\nWe thank the reviewer for their invaluable feedback, and we sincerely hope that the reviewer would reconsider their rating in light of all our points listed in the rebuttal.\"}", "{\"comment\": \"Thanks for the feedback. Given the comments from the authors, I think the claims and contributions seem misaligned. There seem to be several misunderstandings, probably because of the contribution claims. I would stand by my statement that the approach is not configurable to the use that an end user might have in mind. Secondly, the premise of the work is that it is dependent on the already collected crowd-sourced data.\", \"minor_but_over_stating_claims\": \"Crowd-sourced data is human-collected (and hence not human-free).\\n\\nTo ensure that the approach is useful, an evaluation in a new domain would truly test the system, not an independent, similar crowd-sourced benchmark. As such, I will stay with my review and ratings but am curious to see how the other reviewers might respond. 
\\n\\nThe human annotation study points to prior work but I strongly think that the work will benefit from a small study to see if the final benchmark created is agreeable to the application (and if truly configurable) to a broader use case.\"}", "{\"title\": \"Official Response by Authors\", \"comment\": \"We thank the reviewer for their feedback. Below, we address their comments.\\n\\n> Given that the approach leverages use of ChatBotArena supplied queries and even though the quality filter will remove the specific poor quality queries, it is not free from the input i.e., humans prompting the different LLMs on the arena and easily being configured to a use case that end users may have in mind. Discussing results on adapting the evaluation framework to beyond what is available in Chat bot Arena would be needed to support the claims of the paper. Also it would be good to discuss potential biases introduced by using ChatBotArena queries as a starting point. The paper could be strengthened by providing concrete examples or experiments showing how their approach could be adapted to different domains or use cases beyond ChatBot Arena data\", \"we_apologize_if_we_were_unclear_in_our_paper\": \"Bench-O-Matic is a data curation pipeline which can identify and select high quality prompts from any large pool of data, not just Chatbot Arena. In Section 6.4, we applied our pipeline to Wild-Chat-1M, which is a dataset consisting of diverse, real-world conversations and is not sourced from Chatbot Arena. We demonstrated that the benchmark curated from Wild-Chat-1M by our pipeline also improved significantly in quality (Line 343). This strengthens the generalizability of our pipeline.\\n\\n> An additional area of concern is that almost every step of the pipeline involves a prompt engineering exercise including scoring the final models on a scale of 1-5. This is standard but the question emerges on the fidelity of the LLMs themselves and when they hallucinate themselves. 
As evidenced by the score sorted by topic cluster, the data does show that for exact-answer situations like Python game coding versus loose open-ended questions the LLM-judges are not very good. To strengthen the paper - discuss potential failure modes or biases introduced by relying heavily on LLMs, provide more detailed analysis of how performance varies across different types of questions or topics, and suggest ways to mitigate or detect potential hallucinations or errors introduced by LLMs in the pipeline\\n\\nWe are afraid there may be a potential misunderstanding. First, we would like to clarify the interpretation of the \\u201cscore sorted by topic cluster\\u201d in Figure 4 as brought up by the reviewer (Line 303). The score for each topic cluster represents the average number of key qualities satisfied by the prompts within the topic cluster. A higher cluster score implies that more key qualities are satisfied by the prompts within the topic cluster. The score is unrelated to the qualities of the LLM judges. We apologize for the confusion and will make sure to add more descriptions to the figure and improve clarity in the final version of our paper.\\n\\nAs the reviewer mentioned, our pipeline employs prompt-engineered LLMs in two components: prompt selection and model evaluation. In our paper, we validated the fidelity of these components; see below:\\n\\n**Prompt Selection**: Our data curation pipeline assigns defined qualities to each user prompt in the dataset. For each prompt, we employ GPT-4-Turbo as a judge to determine whether it satisfies each defined quality, generating a quality score based on the number of qualities met. To ensure reliability, we validated GPT-4-Turbo's performance as a judge in our experiment, which demonstrated an 85.6% labeling accuracy (Line 246).\\n\\n**Model Evaluation**: We employ LLM judges for evaluating models, using the produced model scores to construct a leaderboard. 
We proposed metrics to measure agreement and correlation to human preference ranking in Section 3. Using these metrics, we validated our approach of using LLM judgment for model evaluation by demonstrating that our benchmark has a strong correlation and agreement with human preference ranking (Line 334).\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"As we approach the end of the discussion period tonight, we want to ensure that our response has adequately addressed all of your concerns. Please advise if further clarification is needed or if there are additional questions. We are keen to address any remaining issues and hope you might reconsider your rating based on the information provided.\\n\\nThank you for your time and consideration.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Sincere Request for Review of Our Responses\", \"comment\": \"As we approach the end of the discussion period, we want to ensure that our response has adequately addressed all of your concerns. Please advise if further clarification is needed or if there are additional questions. We are keen to address any remaining issues and hope you might reconsider your rating based on the information provided.\\n\\nThank you for your time and consideration.\\n\\nBest Regards, \\n\\nAuthors\"}", "{\"title\": \"Official Response by Authors\", \"comment\": \"### Reviewer's Questions\\n\\n> Is the approach really adaptable/configurable ? Restate the claims if not.\\n\\nAs shown in Section 6, our approach is indeed configurable. Bench-O-Matic has three configurable components: an initial dataset, an LLM annotator for prompt selection, and an LLM judge for model evaluation. For the initial dataset, we verified Bench-O-Matic\\u2019s effectiveness on two independent datasets, Chatbot Arena and Wild-Chat-1M (Section 6.4). 
For prompt selection, we experimented with two different LLM judges for annotation, GPT-4-Turbo and Llama-3-70B-Instruct, and demonstrated that both produced benchmarks with significantly improved quality (Line 455). For model evaluation, we experimented with four different LLM judges, GPT-4-Turbo, Claude-3-Opus, Gemini-1.5-Pro, and Llama-3-70B-Instruct, and reported their respective agreement and correlation to human preference in Table 4 (Line 393). \\n\\n> Can the approach work irrespective of humans in the loop ? i.e., crowd-sourcer providing initial prompts.\\n\\nOur approach automates the process of curating high quality prompts from crowdsourced datasets, which otherwise would require expensive manual prompt selection. Notably, our approach is not limited to Chatbot Arena, but is generalizable to any crowdsourced dataset. As detailed in Section 6.4, we demonstrated our generalizability by applying our pipeline on Wild-Chat-1M, which is an independently crowdsourced dataset.\\n\\n> human Annotation study; How many human annotators were involved? What was the inter-annotator agreement? How were discrepancies between annotators resolved? Were the human annotators experts in any particular domains?\\n\\nWe are concerned there may be potential misunderstandings. In Section 3, we proposed metrics to measure how well a benchmark aligns with human preferences. Our metrics demonstrate that the curated benchmarks strongly agree and correlate with human preference leaderboards (as detailed in Table 1, Line 334). This implies that when evaluating a model, our benchmark can accurately predict its performance on real-world preference tasks, which is the primary goal of an effective benchmark. These leaderboards are based on millions of real-world user preferences across a wide range of tasks and domains. Furthermore, according to the Chatbot Arena paper's user preference analysis, there is a high agreement rate (83%) between user votes and expert assessments. 
For detailed information about the leaderboard validity and voter analysis, please refer to the Chatbot Arena paper [1].\\n\\n**Reference**\\n\\n[1] Chiang et al. \\u201cChatbot arena: An open platform for evaluating LLMs by human preference\\u201d, ICML 2024\\n\\n---\\n\\nWe thank the reviewer for their invaluable feedback, and we sincerely hope that the reviewer would reconsider their rating in light of all our points listed in the rebuttal.\"}", "{\"comment\": \"Thanks for weighing in.\\n\\nIs data curation just choosing a few prompts from the WildChat dataset? I am trying to triangulate the impact. \\n\\nIn terms of generalization and configurability - how would we adopt this approach to \\\"curate\\\" a benchmark for a 1) scientific agent or 2) answering questions about molecules/proteins and 3) for code-generation tasks ? These are just examples.\\n\\nI am fairly gracious with my ratings and will stick to them irrespective, given the contributions: As pointed out in one of the reviews above \\\".... It is a well-executed and solid paper, though not necessarily groundbreaking\\\"; I concur.\"}", "{\"title\": \"Official Response from the Authors [2/2]\", \"comment\": \"**Eval-O-Matic**\\n\\nLastly, we expect the benchmark we release to be very useful for the community. From our results provided in Section 6, we demonstrate that the curated benchmarks strongly agree and correlate with human preference leaderboards (as detailed in Table 1, Line 334). This implies that when evaluating a model, our benchmark can accurately predict its performance on real-world preference tasks, which is the primary goal of an effective benchmark, all at a cost of $20. 
\\n\\n---\\n\\nIn summary, we appreciate the reviewers' constructive feedback and take this opportunity to reinforce our contribution to addressing the key challenges in the current landscape of LLM evaluation.\\n\\nWe sincerely hope Reviewer 5rNy would reconsider their rating in light of all our points listed in the rebuttal.\"}", "{\"summary\": \"The paper proposes approaches to automate the benchmark generation process via the prompting of LLMs. The proposals for different characteristics to establish the baselines are fair and the contributions are around the different scoring mechanisms to 1) evaluate the quality of prompts 2) LLM-based judging of prompt outputs to generate a 1-5 score instead of binary preferences and 3) combining them with statistical aggregators to differentiate and evaluate different LLM outputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The promise of the paper is excellent if delivered -- Reconfigurable automated benchmarks without humans in the loop and via crowd sourced data. With a series of prompting techniques in the pipeline, the approach is fair and well studied. Key innovations are in the design of metrics to separate various models and crux of thesis on generating evaluation data that is of high quality and can be separable.\", \"weaknesses\": \"The key weaknesses around this paper are the claims that the proposed approach is human-free and easily configurable as shown in the Table comparing the multiple methods. Given that the approach leverages use of ChatBotArena supplied queries and even though the quality filter will remove the specific poor quality queries, it is not free from the input i.e., humans prompting the different LLMs on the arena and easily being configured to a use case that end users may have in mind. Discussing results on adapting the evaluation framework to beyond what is available in Chat bot Arena would be needed to support the claims of the paper. 
Also it would be good to discuss potential biases introduced by using ChatBotArena queries as a starting point. The paper could be strengthened by providing concrete examples or experiments showing how their approach could be adapted to different domains or use cases beyond ChatBot Arena data\\n\\n\\n\\nAn additional area of concern is that almost every step of the pipeline involves a prompt engineering exercise including scoring the final models on a scale of 1-5. This is standard but the question emerges on the fidelity of the LLMs themselves and when they hallucinate themselves. As evidenced by the score sorted by topic cluster, the data does show that for exact answer situations like Python game coding versus loose open ended questions the LLM-judges are not very good. To strengthen the paper - discuss potential failure modes or biases introduced by relying heavily on LLMs, provide more detailed analysis of how performance varies across different types of questions or topics and suggest ways to mitigate or detect potential hallucinations or errors introduced by LLMs in the pipeline\\n\\n\\nThe details of human annotation were very unclear. See questions below.\", \"questions\": \"- Is the approach really adaptable/configurable ? Restate the claims if not.\\n- Can the approach work irrespective of humans in the loop ? i.e., crowd-sourcer providing initial prompts. \\n- human Annotation study; \\n How many human annotators were involved?\\n What was the inter-annotator agreement?\\n How were discrepancies between annotators resolved?\\n Were the human annotators experts in any particular domains?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response from Authors\", \"comment\": \"We thank the reviewer for their encouraging feedback. We appreciate that the reviewer found our paper sound, well-presented, and relevant to current AI challenges. 
Below, we address the reviewer's comments and questions.\\n\\n**Reviewer\\u2019s Comments**\\n\\nWe thank the reviewer for pointing out all the minor writing corrections. We will make sure to correct them in the final version of our paper.\\n\\n**Reviewer\\u2019s Questions**\\n> Previous studies have shown that fine-tuning the LLM-as-a-Judge can significantly improve evaluation robustness. Has this been considered in the current work? This could help improve the quality of the judges, the main limitation of this benchmark.\\n\\nWe appreciate the reviewer's suggestion regarding the fine-tuning of LLM-as-a-Judge. While we haven't yet tested a fine-tuned version on our benchmark, we plan to conduct a comparative study that will evaluate the performance of both fine-tuned and non-fine-tuned LLM judges in the future.\\n\\n> In Section 4.2, it states, \\\"We also ensure the final dataset is free from personally identifiable information or offensive content.\\\" Could the authors elaborate on how this is achieved? Was this done manually or automatically with the help of an LLM?\\n\\nWe verified that the final dataset is free from personally identifiable information using the Azure PII detection tool. We also used the OpenAI moderation API to flag and remove any prompts with offensive content. We thank the reviewer for pointing this out and will make sure to detail these steps in the final version of our paper.\\n\\n---\\n\\nWe thank the reviewer for their invaluable feedback and hope we have addressed their questions.\"}", "{\"comment\": \"Thanks. I read all the reviews and rebuttal and will keep my original ratings with a change in the soundness score. I plan to take time off to enjoy the Thanksgiving break in the US and wish you the same if you are celebrating. Good luck.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Response from Authors\", \"comment\": \"We thank the reviewer for their feedback. 
Below, we address their comments.\\n\\n> The seven quality criteria may not fully encompass the diversity of user tasks, potentially favoring specific types of prompts over others.\\n\\nAs mentioned on line 504, we agree with the reviewer that the seven defined qualities may not fully capture the range of all possible tasks. While these qualities primarily focus on identifying more challenging prompts\\u2014particularly those involving problem-solving\\u2014we emphasize that Eval-O-Matic demonstrates strong correlation with human preference leaderboards, which are based on millions of real-world user preferences (as shown in Table 5, Line 334). Our pipeline, Bench-O-Matic, is designed to curate prompts from vast datasets based on defined qualities, and is not limited to the seven we identified. Developers can customize their own criteria for desired prompt qualities, such as whether a prompt effectively assesses reasoning abilities. \\n\\n> The synthesis of data is reliant on LLMs as Judges. The LLM-as-a-Judge framework may introduce stylistic or self-bias, even with adjustments, which could influence benchmark objectivity in certain cases.\\n\\nWe agree with the reviewer that the LLM-as-a-Judge framework may introduce biases (Line 462). In Section 4.2, we validated our LLM judges for prompt selection (see \\u201cData Filtering\\u201d below). In Sections 6.5 and 6.6, we addressed and proposed solutions to mitigate stylistic biases and self-biases when using LLM-as-a-Judge for model evaluation (see the \\u201cStylistic Bias\\u201d and \\u201cSelf-Bias\\u201d sections below).\\n\\n**Data Filtering**\\n\\nOur data curation pipeline assigns defined qualities to each user prompt in the dataset. For each prompt, we employed GPT-4-Turbo as a judge to determine whether it satisfies each defined quality, generating a quality score based on the number of qualities met. 
To ensure reliability, we validated GPT-4-Turbo's performance as a judge in our experiment, which demonstrated an 85.6% labeling accuracy (Line 246).\\n\\n**Stylistic Bias**\\n\\nWe agree with the reviewer that LLM-as-a-Judge may introduce stylistic biases (Line 480). To address this, Section 6.5 presents a standard statistical technique to decouple style and substance in the leaderboard. By isolating stylistic influence from final model scores, the style-controlled benchmark reflects model strength agnostic of style. Notably, this approach removes stylistic confounders rather than arbitrarily adjusting for them. Detailed methodology is provided in the appendix (Page 17). \\n\\nFurthermore, as detailed in line 441 of Section 6.5, we conducted experiments demonstrating that our style-controlled benchmark cannot be manipulated through stylistic factors\\u2014addressing the primary concern regarding these biases potentially exploiting LLM judge preferences. While our unmodified benchmark may favor models that provide more detailed responses, our style-controlled scores show no preference toward responses with enhanced styles or longer length over the original model's output (Table 5; Line 441). This suggests stylistic biases are effectively mitigated in our benchmark.\\n\\n**Self-Bias**\\n\\nWe agree with the reviewer that LLM-as-a-Judge may also introduce self-biases (Line 488). In Section 6.6, we proposed ensemble LLM judges to mitigate self-biases. In our experiment, we observed that combining GPT-4-Turbo and Gemini-1.5-Pro judgments reduces self-biases exhibited in the benchmark using GPT-4-Turbo as a single judge (Line 497). Furthermore, our results showed that our ensemble method achieves higher agreement and correlation to human preferences than the single judge approach (Line 394). 
This suggests self-biases are mitigated in our benchmark.\\n\\nLastly, we note that our benchmark achieves strong agreement and correlation with human preference, thereby validating our benchmark's quality and usefulness to the community (Table 1; Table 3).\"}", "{\"summary\": \"The work proposes Bench-O-Matic, a system for automatically curating high-quality, open-ended LLM benchmarks by using large-scale, crowd-sourced data. This tool addresses the need for evolving benchmarks that adapt to the rapid development of LLMs without requiring human intervention.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Bench-O-Matic efficiently creates high-quality benchmarks from crowd-sourced data without human input; the work addresses the scalability issue in benchmark curation.\", \"The work introduces novel metrics like Separability with Confidence and Pair Rank Brier Score, enhancing the robustness and reliability of benchmark assessments.\", \"Eval-O-Matic achieves strong performance alignment with human preferences for only $20 per evaluation, and provides a cost-effective alternative to static benchmarks.\"], \"weaknesses\": [\"Quality assurance. The seven quality criteria may not fully encompass the diversity of user tasks, potentially favoring specific types of prompts over others.\", \"The synthesis of data is reliant on LLMs as Judges. The LLM-as-a-Judge framework may introduce stylistic or self-bias, even with adjustments, which could influence benchmark objectivity in certain cases.\"], \"questions\": \"Is there any estimation on the error/quality of the data generated? 
Or using some metrics to evaluate the similarity of the generated data with the real-world data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an automated pipeline, Bench-O-Matic, designed to curate prompts and create benchmarks for evaluating large language models (LLMs). The authors propose new metrics to assess benchmark quality, ensuring a clear separability of confidence scores and alignment with human preferences. The prompts are organized into topic clusters to ensure diversity, and an \\\"LLM-as-a-Judge\\\" approach is used to evaluate responses from various LLMs, fully automating the evaluation process. Additionally, the paper presents two novel benchmarks generated using this pipeline: Eval-O-Matic, based on Chatbot Arena, and Wild-O-Matic, derived from WildChat-1M.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The problem statement is clearly defined.\", \"The paper addresses a significant challenge highly relevant to the current state of AI and places well in the current literature.\", \"The pipeline is flexible and open-ended, allowing for continuous improvements over time.\", \"The experiments are comprehensive, demonstrating that the pipeline effectively creates benchmarks based on the metrics defined in the paper, with multiple LLMs evaluated on Eval-O-Matic.\", \"The paper presents new ideas to evaluate benchmarks to overcome previous issues.\"], \"weaknesses\": [\"Using an LLM to evaluate other LLMs\\u2019 responses may limit the complexity of the benchmark prompts. While employing an ensemble of judges partially mitigates this issue, there is still an inherent limitation. However, the advantages of an automated pipeline outweigh this concern, and the authors have implemented techniques to reduce evaluation biases.\", \"I have a hard time finding weaknesses for the paper. 
It is a well-executed and solid paper, though not necessarily groundbreaking.\", \"**Minor Comments**\", \"On line 80, \\\"achieve 98.6% correlation\\\" should be \\\"achieve**s** 98.6% correlation\\\".\", \"On line 82, \\\"Our work**s** makes\\\" should be \\\"Our work makes\\\".\", \"On lines 206 and 352, \\\"Section C\\\" should probably be changed for \\\"Appendix C\\\" for clarity.\", \"On line 464, \\\"an regression based approach\\\" should be corrected to \\\"**a** regression-based approach.\\\"\"], \"questions\": [\"Previous studies have shown that fine-tuning the LLM-as-a-Judge can significantly improve evaluation robustness. Has this been considered in the current work? This could help improve the quality of the judges, the main limitation of this benchmark.\", \"In Section 4.2, it states, \\\"We also ensure the final dataset is free from personally identifiable information or offensive content.\\\" Could the authors elaborate on how this is achieved? Was this done manually or automatically with the help of an LLM?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I respectfully disagree with Reviewer 5rNy's interpretation. In my understanding, the paper clearly presents Bench-O-Matic as a *data curation pipeline*. It does not claim that the initial data collection process is human-free, but rather that the curation process itself is automated\\u2014a distinction that is emphasized throughout the paper. For example, the word 'curation' is used multiple times in both the abstract (5 occurrences) and the introduction (12 occurrences), highlighting the paper's primary focus. If there are specific parts of the text that gave the impression the data collection itself is human-free, I would appreciate it if you could point them out for further discussion. 
Personally, I did not interpret it that way.\\n\\nBeyond this clarification, I find that the claims made in the paper are appropriately framed and well-supported by the evidence provided, even before the authors' rebuttal. Many of the points raised in the questions and weaknesses could be addressed directly by referring to the content already present in the paper. While I initially refrained from addressing these points to allow the authors to respond, I felt the presentation was sufficiently clear from the outset, and the rebuttal only strengthened this impression.\\n\\nI agree with Reviewer 5rNy that the method relies heavily on prompt engineering, but this is both a standard practice and an unavoidable aspect of this type of research. Importantly, the benchmarks created through this process show strong agreement with human preferences, which, in my view, mitigates this concern.\\n\\nI believe that the misunderstandings may come from a misreading or an incomplete interpretation of the paper\\u2019s focus, rather than from overstated claims by the authors. The distinction between data collection and curation, as well as the evidence supporting the pipeline's configurability, is well-documented in the text. I hope this perspective helps clarify the authors\\u2019 intent and highlights the contributions of this work.\\n\\nRegarding the comment on the human annotation study and the need for a broader evaluation, could you clarify how this ties into the scope of the paper? From my perspective, this work is explicitly focused on curating datasets rather than collecting data. Additionally, I find the evidence for the pipeline\\u2019s configurability\\u2014such as its successful application to both Chatbot Arena and Wild-Chat-1M datasets\\u2014compelling. Using different LLMs for evaluating the prompt and scoring the models also seems relatively simple. Are there specific aspects that led you to question its configurability? 
I believe this clarification could help align our interpretations.\"}", "{\"metareview\": \"The paper presents an automated pipeline, Bench-O-Matic, which curates prompts and creates benchmarks from large, crowd-sourced datasets for evaluating LLMs. Additionally, new metrics are proposed to assess benchmark quality. Experiments demonstrate that its performance aligns with human preferences. However, the reliability of the new benchmark relies heavily on LLMs\\u2019 judgments, and the current version does not address the generalizability of the pipeline. While I acknowledge that the pipeline can be applied to other data sources, generalizability is crucial if this work focuses on LLM evaluation.\", \"additional_comments_on_reviewer_discussion\": \"Discussion Summary:\\n\\n1. Clarification on LLMs\\u2019 Bias: The authors\\u2019 use of an ensemble of judges to mitigate LLMs\\u2019 bias is partially effective. This approach is acceptable, as it is commonly adopted in similar works.\\n2. Generalization Ability of Bench-O-Matic: The authors claim that the pipeline can be applied to other data sources. However, throughout the rebuttal session, no new experiments were presented to support this claim.\"}", "{\"title\": \"Official Response from Authors [1/2]\", \"comment\": \"We thank the reviewers for their additional comments. We would like to address specific comments from the reviewers first, then summarize key contributions of our work.\\n\\n> If the curated dataset includes questions related to other domains like scientific agents, molecules/proteins, or code generation, the pipeline should adapt and create the appropriate topic clusters, provided the necessary prompts exist. Maybe the authors could expand on this?\\n\\nReviewer 3LVj is correct in noting that our pipeline effectively adapts to create appropriate topic clusters in the source dataset. 
For example, our pipeline identifies over 4,000 distinct topics from the original dataset from Chatbot Arena across a wide range of domains, such as \\u201cProfessional Email Communication,\\u201d \\u201cPyTorch Autoencoder Implementation,\\u201d and \\u201cBaking and Peanut Butter Recipes.\\u201d Then the pipeline evaluates these topic clusters and prompts, selecting only the highest quality clusters and prompts based on the desired specifications. As a result, the final curated benchmark reflects topics that naturally occur in real-world user scenarios while aligning with the specific requirements set by the pipeline. For example, the final curated benchmark, Eval-O-Matic, contains prompts from topic clusters such as \\u201cAdvanced Algebra and Number Theory\\u201d, \\u201cPyTorch Autoencoder Implementation\\u201d, and \\u201cChess Strategy and Gameplay\\u201d. Figure 6 shows a plot with topic clusters and their quality scores assigned by Bench-O-Matic (Page 22). \\n\\n>In terms of generalization and configurability - how would we adopt this approach to \\\"curate\\\" a benchmark for a 1) scientific agent or 2) answering questions about molecules/proteins and 3) for code-generation tasks ? These are just examples.\\n\\nWe understand Reviewer 5rNy's comment that Bench-O-Matic does not produce a benchmark tailored to specific tasks. While we agree that extending the pipeline to curate targeted benchmarks is a promising direction for future work, we believe our approach effectively addresses the current concerns of model developers. Most developers are interested in how well their trained models perform on real-world user queries and the overall user experience post-deployment. This necessitates evaluating model performance on queries sourced from actual user interactions and naturally occurring topic clusters. 
For example, tasks related to \\\"code-generation\\\" are indeed frequently asked by real users in our crowdsourced datasets, and the final curated benchmarks reflect this and contain topic clusters such as \\\"PyTorch Autoencoder Implementation\\\" and \\\"Web Development & APIs\\\". By focusing on curating benchmarks from crowdsourced data, our pipeline ensures that the resulting benchmarks best reflect the types of questions real users are likely to ask, thus providing a more accurate assessment of a model's real-world performance.\\n\\n## Key Contributions\\n\\n**Bench-O-Matic**\\n\\nOur pipeline offers a solution to key challenges in the current landscape of LLM evaluation. Traditional static benchmarks often struggle to effectively differentiate state-of-the-art models, fail to align closely with real human preferences, and suffer from issues such as performance saturation and susceptibility to test-set leakage. However, creating new, high-quality benchmarks typically requires extensive manual curation, incurring significant labor costs. Consequently, there is a growing demand among model developers for benchmarks that can:\\n1. Effectively differentiate state-of-the-art models\\n2. Align to real human preferences and evaluate models on real-world user tasks\\n3. Be produced by a cost-effective pipeline which can constantly curate new, high-quality benchmarks to avoid saturation and test-set leakage\\n\\nTo address the first two points, we want to first highlight our novel evaluation metrics: Separability with Confidence Intervals, Agreement with Human Preference, and Brier Score. These metrics provide robust tools to quantify a benchmark\\u2019s effectiveness on qualities that matter most to model developers. We believe these metrics empower developers to make informed decisions about which benchmarks best suit their needs. 
Subsequently, we showed that the benchmarks we curated are effective at differentiating model performances and align well with human preferences.\\n\\nTo address the third point, our pipeline demonstrated remarkable cost efficiency, processing 200,000 prompts for just \\\\\\\\$45 (Line 259). In contrast, GPQA, which comprised 500 multiple-choice questions, incurred a cost exceeding \\\\\\\\$120,000 (Line 37). This affordability enables Bench-O-Matic to be applied continuously to new datasets, allowing for the curation of high-quality benchmarks on demand. By doing so, it mitigates the risks of saturation and test-set leakage, providing model developers with access to fresh, dynamically curated benchmarks for testing their models.\"}" ] }
590yfqz1LE
Measuring Non-Adversarial Reproduction of Training Data in Large Language Models
[ "Michael Aerni", "Javier Rando", "Edoardo Debenedetti", "Nicholas Carlini", "Daphne Ippolito", "Florian Tramèr" ]
Large language models memorize parts of their training data. Memorizing short snippets and facts is required to answer questions about the world and to be fluent in any language. But models have also been shown to reproduce long verbatim sequences of memorized text when prompted by a motivated adversary. In this work, we investigate an intermediate regime of memorization that we call non-adversarial reproduction, where we quantify the overlap between model responses and pretraining data when responding to natural and benign prompts. For a variety of innocuous prompt categories (e.g., writing a letter or a tutorial), we show that up to 15% of the text output by popular conversational language models overlaps with snippets from the Internet. In worst cases, we find generations where 100% of the content can be found exactly online. For the same tasks, we find that human-written text has far less overlap with Internet data. We further study whether prompting strategies can close this reproduction gap between models and humans. While appropriate prompting can reduce non-adversarial reproduction on average, we find that mitigating worst-case reproduction of training data requires stronger defenses—even for benign interactions.
[ "large language models", "memorization", "data extraction", "originality", "privacy" ]
Accept (Poster)
https://openreview.net/pdf?id=590yfqz1LE
https://openreview.net/forum?id=590yfqz1LE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xtVaEwKTpH", "wyE9sF4GRV", "vsZtaxyMP9", "uWorMauLkq", "tQBI8wDogF", "pTrj4QEfNy", "nSuJqVpxV7", "mIx94xu4dg", "loDvIlQR0u", "hN9YUQLUQY", "gA8IAa5bzl", "fls7dAqc2q", "fVjFNVFo4L", "anMARt69lo", "XjG1OmrFGW", "XSTBnRu3Av", "V14q5ujZYh", "TrE5pLFb1Q", "Sf2cDlLVM3", "MbiAfGkYcs", "LBuTkeYE0Y", "GLcn0XfcqS", "FBYSH0l1bg", "DEL7IH5nPF", "AcxIPI9sCG", "Aai106DwCf", "92mMsRgKFf", "5Worf8kV0t", "5S6WnQlXgD", "3cf5T4sXfl", "2vlVq2yUMH", "1WpFBaW1iM", "18jgVfaAYF" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1733126509516, 1732712690512, 1732688339148, 1730677372157, 1730736940270, 1732658614876, 1732550145060, 1730722985106, 1732189349318, 1730670538992, 1732611775096, 1732189689937, 1733216780504, 1730695826113, 1732189743469, 1732565909777, 1733217822597, 1732190426010, 1732189871282, 1730569605975, 1732189212756, 1732264473404, 1732190574538, 1730715587091, 1734406537125, 1732190667023, 1731312373873, 1732645143717, 1733211309188, 1732257243175, 1732642416790, 1732190213755, 1737524144003 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_Z9TT" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_VqkD" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_6enV" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_5QLf" ], [ 
"ICLR.cc/2025/Conference/Submission11752/Reviewer_7CNj" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_6ufj" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_6ufj" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_Z9TT" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_TiF9" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_6enV" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_7CNj" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_VqkD" ], [ "ICLR.cc/2025/Conference/Submission11752/Area_Chair_wkLq" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_sm5N" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_sm5N" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_TiF9" ], [ "ICLR.cc/2025/Conference/Submission11752/Reviewer_5QLf" ], [ "ICLR.cc/2025/Conference/Submission11752/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Reply to Author Rebuttal\", \"comment\": \"Thank you for the author's response. I would like to reply as follows:\\n\\n1. **Motivation**: I have carefully read the author's rebuttal, the opinions of other reviewers, and some relevant existing works. 
In summary, I partially retract my initial judgment and acknowledge the novelty of the conclusion presented in the paper and its value as a contribution to the research community. However, I still have doubts about the author's explanation: for LLMs, the pretraining task of memorizing text through next-token prediction is part of the training process. For a given prefix, LLMs tend to favor higher-probability tokens during sampling, because the joint prediction probability is accumulated token by token. Therefore, I still believe that LLMs are more likely to align with continuous segments whose probability was maximized during the training phase (i.e., internet text as the training corpus). This is not quite analogous to human examples, which are more aligned with a memorization process than with regular reading, and there has not been extensive research showing that the memory logic of human reading and explicit memorization is consistent. Of course, this conclusion has not been put forward by previous work, so I acknowledge its value to the community.\n\n2. Regarding the response to other issues, I have no further questions.\n\nGiven my response in point 1, I am more inclined to increase my score to 5, but not higher. However, I have adjusted my confidence accordingly, to allow the AC and other reviewers to play a more important role in the decision-making process.\"}", "{\"title\": \"Thank you and follow-up\", \"comment\": \"We thank the reviewer for their follow-up and raising their score.\n\n> I wonder whether these could be retrieved with the implementation from Nasr et al., 2023 by iteratively searching for all characters that could expand the current match.\n\nThis should indeed be possible, but is restricted to suffix context of a snippet. That is, since AuxDataset is implemented as a suffix array, we could efficiently expand a snippet to obtain a context that follows that snippet in AuxDataset. 
However, obtaining prefix context could be more expensive, since that likely requires a brute-force approach.\"}", "{\"comment\": \"Thank you for the response - it answered most of my questions, and I will raise my score.\", \"re\": \"original context of reproductions - I wonder whether these could be retrieved with the implementation from Nasr et al., 2023 by iteratively searching for all characters that could expand the current match.\"}", "{\"summary\": \"This paper investigates non-adversarial reproduction in Large Language Models, where models unintentionally reproduce strings of their training data in response to benign prompts. By analyzing various tasks including creative writing and expository writing, the authors found that 10%-15% of LLM-generated content overlaps with internet text, significantly more than human-generated text. The study shows that expository tasks are especially prone to this phenomenon. Although specific prompting strategies can reduce the frequency of reproduced content, they do not fully prevent long sequences from appearing, indicating a need for stronger measures to mitigate unintended data leakage.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper studies a very interesting question about quantifying non-adversarial reproduction in LLMs, which is practical in using LLMs.\\n\\n2. The analysis of the question is comprehensive, containing the conclusion for different tasks. The study provides a good understanding of how and when LLMs are more likely to reproduce training data.\\n\\n3. The exploration of prompting as a mitigation strategy gives good insights, showing both its potential and limitations.\", \"weaknesses\": \"1. The evaluation results might be biased because the authors cannot access the real training dataset of evaluated LLMs.\\n\\n2. Some points in the paper are not very clear. 
For example, for the prompt mitigation part, the authors do not demonstrate which dataset they are using. And since they can use WildChat or LMSYS-Chat-1M, what is the motivation for collecting a new dataset?\n\n3. The length of substrings that are used to calculate the overlap rate is strange. This paper considers a substring of 50 words, which is 'shorter than the 50 token (150\u2013200 characters) threshold used in previous studies on adversarial extraction'. However, the authors do not provide a decent reason for using 50 words. The authors also mention that a substring of 50 words could be either common or unique sentences. However, I do not think a common sentence or phrase should be considered as a leakage of training data. Using such a standard could further bias the evaluation results.\", \"questions\": \"1. Why do the authors not consider open-source LLMs where they can know which datasets are used for training? For example, in the Membership Inference Attack area of LLMs, researchers usually use Pythia and the Pile dataset.\n\n2. Why do the authors collect their own dataset instead of WildChat and LMSYS-Chat-1M? What is the unique advantage of the new dataset?\n\n3. Why do the authors consider substrings of 50 words? How will the results change if changing the threshold?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper tackles the problem of mitigating non-adversarial training data reproduction (natural and benign prompts from users revealing training data verbatim) in LLMs. The experiments find that in some cases as much as 15% of text output by LLMs overlaps with moderate snippets (40-60 characters) of the Internet. A comparative analysis finds that human-written text has far less overlap compared to LLM generated text. The paper proposes prompting strategies to close this human and LLM gap. 
Though these strategies close the gap between LLMs and humans on average, the paper suggests that worst-case reproduction might need stronger adversarial defenses.\n\nThe classes of tasks chosen for LLM text generation in this paper can be broadly classified into *creative writing*, *expository writing*, and *argumentative writing*. Since training data information is not available for certain models, the training data is approximated by collecting a large dataset of Web content (AUXDATASET).\n\nThe primary metric used is overlap rate (the percentage of characters in each generation that belong to a substring of at least 50 consecutive characters found exactly in AUXDATASET).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is an extensive analysis of situations where LLMs generate text from training data verbatim, even when not explicitly prompted to reveal such information. The latter case has been seen in adversarial attacks against LLMs in recent research. So, the results from this study can be used to inform us about scenarios where data leakage happens without explicit adversarial effort.\", \"weaknesses\": \"The current text is ordered as a set of subsections with a verbatim description of experimental steps. The presentation lacks focus on the main contributions of this research. For example, if this is the case, it should probably be highlighted that LLM data leakage studies for benign prompts haven't been looked at. Furthermore, instead of presenting all the results (as in Section 3) as small headings and text, it would help to have an additional small section which highlights the most important contributions which readers can take away from the paper.\n\nI have some concerns about the collection of human data regarding plagiarism and contamination. Please refer to the Questions section.\", \"questions\": \"1. 
(Line 162 - Question about human-written baseline) Even with this measure of choosing content after LLM cut-off date and content which is not part of AUXDATASET, how is it confirmed that the content taken from Reddit is not LLM generated? Is it not possible that an LLM might have been used to generate it?\\n\\n2. (Line 321) The paper mentions, \\u201cWe find that LLMs reproduce more existing data than humans, except when humans do blatant plagiarism.\\u201d I might have missed it in the text, but it would be great to have some clarification regarding how this is controlled? For example, given a prompt like \\u201cWrite a tutorial about setting up an Nginx server.\\u201d, humans might be prone to copy data verbatim from top-ranked articles on a search engine. There is a discussion in Line 404 about IMDb reviews, but what measures were taken for Reddit content?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response, I'll maintain my already positive score\"}", "{\"comment\": \"The authors' responses are acknowledged. Thank you for clarifying the details further.\\n\\nI'll stick with my already positive score.\"}", "{\"summary\": \"The paper shows how language models can reproduce training data even with 'non-adversarial' prompts. While LLMs have been previously shown to reproduce training data, these experiments were conducted with adversarial assumptions, and the prompts used can be considered a violation of user policy by many LLM developers. The authors argue that even under the assumption of non-adversarial prompts, i.e., everyday use prompts that are not targeted at extracting training data, one can see LLMs regurgitating their training data. 
The authors provide a wide range of experiments on many different SOTA conversational LLMs and with many different categories of prompts to support their hypothesis.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Incredibly relevant work. While the pessimist in me believes that LLM developers will always find new excuses to argue why LLMs regurgitating sensitive or proprietary data is not their responsibility, it is important to try and keep holding them accountable. In the context of adversarial reproduction of training data not being \\\"typical or allowed user activity\\\", this work plays an important role in highlighting how even everyday use of LLMs can reproduce training data.\", \"Wide range of experiments, both in terms of different models, as well as verifying various hypotheses. Lots of interesting insights.\", \"Qualitative analysis and handpicked examples. I was happy to see some qualitative analysis by the authors, especially of the long tail.\"], \"weaknesses\": [\"The use of a 50-character limit for overlap rate. I'm not convinced that the 50-character limit is strong enough to cause issues for LLMs reproducing training data. I'm not familiar with legal precedent on reproducing text without attribution; but at least when quoting from other sources, the limits are usually looser - even the strictest being around 25-50 words and usually, it is a few hundred words (https://ogc.harvard.edu/pages/copyright-and-fair-use, https://stevelaube.com/how-much-can-i-quote-from-another-source-without-permission/). Although, it should be mentioned that the authors are very open about their overall results and also discuss the long-tailed part of the reproduction, which highlights some actual issues. But despite this, their main results and trends are focused on an overlap rate defined with a 50-character limit.\", \"Lack of details on additional prompts used in the experiments. 
The authors have created some manual human-written prompts, which are used alongside data scraped from Reddit, in their experiments. I understand that releasing all these prompts during the reviewing phase might not be practical, and I appreciate the authors mentioning that they will release them in a later version of the paper, but I would like to see some details in the paper to perform a proper review of their work. More details on this in the questions below.\"], \"questions\": [\"Can the authors reason why the 50-character limit beyond simply the argument that non-adversarial prompts reproduce less training data? The qualitative analysis of those 50-character snippets is appreciated, but as the authors showed, many of them are common phrases that might not constitute problematic behaviour from LLMs.\", \"Can the authors provide more details on how their manual prompts were created? Were they crowdsourced, or written by the authors themselves? Were they sourced from how authors themselves commonly use LLMs, or were they thought up in bulk at once? Were there efforts made to categorize them into a variety of prompts (beyond the three broad categories used in the paper), or maybe efforts made to check this variety after the prompts were created? No answers are bad answers here, even if the prompts were written by the authors in bulk in one sitting to capture the broad categories defined, that's a good start. But in any case, details are needed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their kind words and constructive comments.\\n\\n> Frequency of Reproduced Sequences: The paper could benefit from clarifying how often the reproduced sequences appear within their training data proxy, AUXDATASET. [...]\\n\\nThis would be very interesting. 
Unfortunately, there is no way to reliably estimate the frequency of snippets in any model\\u2019s training data. First, model providers might perform deduplication of their training data, such that a snippet\\u2019s frequency on the internet might not match its frequency in the training data. Second, AuxDataset combines potentially overlapping sources (e.g., multiple copies of Wikipedia in different datasets). In both instances, we can reliably approximate whether a snippet is part of the data, but not with which frequency.\\n\\n> Justification of 50-Character Threshold: The choice of a 50-character threshold to define reproduced sequences is not fully justified. [...]. Further explanation would help readers assess whether this threshold adequately captures the difference between common phrases and more problematic reproductions.\\n\\nWe hope the explanation in our [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y\\n) and the improved discussion in the paper provide a better justification. We are happy to answer follow-up questions.\\n\\n> Data in Figure 2(b): Figure 2(b) appears to have only partial bar plots for some models (Llama and GPT), making the comparison across models less robust. Or am I missing something here?\\n\\nThis is correct and a consequence of our post-hoc analysis of WildChat and LMSYS-Chat-1M. For example, WildChat consists solely of ChatGPT conversations and thus only has generations for GPT models. This is okay; the goal of Fig. 2(b) is to highlight that non-adversarial reproduction also happens in the wild (outside of our controlled experiments), not to provide a robust comparison between models.\\n\\n> Overall, I am constantly battling between thinking that 50 characters is too less, and then seeing the argument that these reproduction rates are much higher than humans. [...]. Would a human with a passage (RAG style reading comprehension) be a better baseline? 
[...]\\n\\nWe thank the reviewer for this in-depth thought; however, there are subtleties that break this analogy. For LLMs, we explicitly focus on *copying/reproduction from the training data*, not from the prompt. However, humans copying text from a provided passage corresponds to LLMs copying from their prompt/context. A better analogy are humans who are able to recall and lookup information from memory\\u2014which is exactly what we measure.\"}", "{\"summary\": \"In this paper, the authors investigate the issue of non-adversarial reproduction, which refers to the overlap between LLM outputs in response to natural prompts (mainly writing prompts) and internet text. The authors argue that LLM-generated text carries a higher risk of overlapping with internet text than human-written text. Additionally, the authors explore the preliminary use of prompt design to reduce overlap.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Non-adversarial reproduction is valuable for protecting LLM outputs from risks such as infringement and privacy violations.\\n \\n2. The authors validate the existence of non-adversarial reproduction risks across several mainstream models.\", \"weaknesses\": \"1. The authors\\u2019 conclusion seems somewhat obvious, as LLMs are explicitly fit on internet text. Intuitively, LLMs are more likely to produce text resembling their training corpus than humans. The authors should better articulate the value of their findings.\\n\\n2. Building on the first point, the authors propose using prompt design to mitigate overlap. However, the method and its underlying principles lack significant innovation.\\n\\n3. The authors appear to conflate reproduction of internet text and training data. These are not equivalent, as the training data depends on the model's degree of fit. Especially when using a simulated dataset, this discrepancy may be amplified.\\n\\n4. The task is limited to writing. 
I suggest the authors consider extending it to other tasks. Generally, open-ended writing tasks are more likely to lead LLMs to recite memorized training data.\", \"questions\": \"1. I suggest that the authors consider using some training data detection methods (e.g., [1]) to assist in identifying training corpus when exploring reproduction of training data.\\n\\n[1] Detecting Pretraining Data from Large Language Models. (ICLR-24)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for considering our reply and for raising their score. We will think of additional ways to clarify our threshold choice and provide more intuition.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful comments.\\n\\n> The current text is ordered as a set of subsections with a verbatim description of experimental steps. The presentation lacks focus on the main contributions of this research.\\n\\nWe have updated the paper to make our main focus more clear; please refer to the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y) for details. We are happy to answer any remaining ambiguities. Finally, we note that each paragraph in Sec. 3 and 4 corresponds to one finding of our study.\\n\\n> Even with this measure of choosing content after LLM cut-off date and content which is not part of AUXDATASET, how is it confirmed that the content taken from Reddit is not LLM generated? Is it not possible that an LLM might have been used to generate it?\\n\\nIn short, even if a few rare instances of human-written baselines contain some LLM-generated text, this does not weaken our comparison (it might even make it stronger). 
We discuss this more in the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y).\\n\\n> The paper mentions, \\u201cWe find that LLMs reproduce more existing data than humans, except when humans do blatant plagiarism.\\u201d I might have missed it in the text, but it would be great to have some clarification regarding how this is controlled? For example, given a prompt like \\u201cWrite a tutorial about setting up an Nginx server.\\u201d, humans might be prone to copy data verbatim from top-ranked articles on a search engine. There is a discussion in Line 404 about IMDb reviews, but what measures were taken for Reddit content?\\n\\nWe can indeed not rule out plagiarism in human-written responses for the Reddit content (WritingPrompts and ELI5 tasks). However, we did perform a manual investigation (as for IMDb reviews), which did not reveal any significant cases of blatant plagiarism (as opposed to IMDb reviews). This matches our Fig. 5a/c, where human texts contain fewer reproduced snippets of any length compared to LLMs, and Fig. 6 where the mean overlap rate for humans on creative/expository tasks is below 2% (and the median is 0).\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for taking the time to revisit our work and comments.\\n\\nWe agree with the reviewer's intuition about why LLMs are more likely to produce verbatim sequences of training data compared to humans. However, this hypothesis had not been tested by previous work, and we demonstrate this empirically for the first time. We use human baselines mainly to highlight those differences.\"}", "{\"summary\": \"The paper measures character-level verbatim text produced by LLMs (GPT, Claude, and Llama) on benign prompts and conversations, in contrast to adversarial membership inference attacks. The authors find that LLMs indeed reproduce internet text up to 15% of the time (50+ characters), in the worst case regurgitating over 1000 characters. 
The authors provide a breakdown of severity by task and compare to human baselines. Finally, the authors find that setting a system prompt can reduce this form of text reproduction for shorter sequences.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Significance: This paper is interesting because it quantifies the known phenomenon of LLMs reproducing text from their training data. In contrast to prior work, it attempts to evaluate a natural distribution of reproduced text lengths. The topic is important as LLMs are commonly used as assistants: there is a quantified risk to using LLMs for writing and code generation, as training data reproduction could result in unintentional plagiarism from the end user. This is exemplified by one of the most interesting findings of this paper: benign prompting can result in reproduced text of 100 characters (2.5% of the time) and 1000 characters (0.01% of the time).\", \"quality\": \"The authors conduct extensive analysis to break down the reproduction by text category and also demonstrate that the reproduction rate is higher than for human-written text, including qualitative analysis and a simple mitigation strategy, with a clear presentation of their findings.\", \"weaknesses\": \"1. 50 character cutoff may overestimate regurgitation: The authors acknowledge this limitation, but it is difficult to differentiate based on character length alone whether the data is truly regurgitated off the internet or just due to it being a common phrase, especially when the length is around 50 characters. Additional analysis to estimate a numerical proportional breakdown between these two categories would make the paper more rigorous. There is far less doubt about text reproduction vs. common phrases past the 100-150 character point.\n\n2. 
AI contaminated human baselines: Since the human baselines were scraped off the internet after the training cutoff of these models, they could already contain AI generated text from a prior generation of LLMs rather than represent a natural distribution of human writing. I would find it interesting if you can also evaluate the reproduction length distribution of human data known to be mostly free from AI contamination, i.e. before the widespread release of LLM assistants.\", \"questions\": \"1. How fast can you check whether a given generation is in AuxDataset? Curious if we can reduce the probability of regurgitating very long text by doing a check against an internet proxy dataset at inference time.\\n\\n2. It's very interesting that the system prompt you used below reduces reproduction length. Why do you think this works? Did you try any other system prompts outside of this and the Claude prompt? Is it because the model can internally model a circuit to determine the probability of text being in its training data? If this is the case, it's even more interesting that this works nearly just as well for the smaller Llama 8B model as the larger 405B model, especially since the 8B model is expected to have weaker internal circuits.\", \"prompt\": \"```Your primary goal is to produce content that is completely original and free from any existing text on the Internet. 
This means that if someone were to take a few words from your explanation and search for them online, they can not find any matching results.```\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their reinforcing feedback and thoughtful comments.\\n\\n> Can the authors reason why the 50-character limit beyond simply the argument that non-adversarial prompts reproduce less training data?\\n\\nWe updated the paper with a clearer motivation for the 50-character threshold; see the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y) for more details. We are happy to answer any follow-up questions.\\n\\n> I'm not convinced that the 50-character limit is strong enough to cause issues for LLMs reproducing training data.\\n\\nBesides the clearer motivation for 50+ character snippets, we also report distribution plots *for every possible character threshold* (e.g., Fig. 3 and 5). We further included a new Fig. 10 that contains details for every individual model.\\n\\n> Can the authors provide more details on how their manual prompts were created? Were they crowdsourced, or written by the authors themselves?\\n\\nIf not mentioned otherwise, all prompts were created by the authors (manually). We updated App. A.1 and Table 1 to make this more explicit. Our updated submission also contains our code (including raw prompt data) and a link to the full dataset.\\n\\n> Were they sourced from how authors themselves commonly use LLMs, or were they thought up in bulk at once? Were there efforts made to categorize them into a variety of prompts (beyond the three broad categories used in the paper), or maybe efforts made to check this variety after the prompts were created?\\n\\nWe started with the three text types (creative/expository/argumentative). 
We then brainstormed tasks that are i) based on real-world LLM usage (of the authors and other people on the internet), ii) covering a diverse set of prompts, iii) roughly equally distributed over text types. For each task, we invented concrete prompts in a batch. Our original set of prompts had task pairs that turned out to be very similar (e.g., Statement of Purpose and Motivation Letter); we replaced one instance of each pair to increase diversity/coverage.\"}", "{\"comment\": \"Thanks to the authors for their reply. The response addresses most of my concerns. However, I still find the motivation behind selecting the current substring length to be strange. Specifically, using a shorter 50-word substring will consider common phrases as potential privacy issues, even though outputting a common phrase should not necessarily be considered problematic.\\n\\nOverall, I acknowledge the authors' efforts to clarify these points and address my primary concerns. Therefore, I would like to raise my score to 6.\"}", "{\"title\": \"Thank you and follow-up\", \"comment\": \"We thank the reviewer for taking the time to read our rebuttal and providing further constructive feedback.\\n\\n**Training Data Frequency Analysis.**\\nWe agree that it's not difficult from a technical perspective, and useful for open datasets. The point is that it does not provide meaningful information for *private/unknown datasets*. Our frequency estimates would not necessarily represent any relevant information about the private training data because (1) model providers often deduplicate and (2) open-source datasets often overlap (e.g., they all contain Wikipedia).\\n\\n**Human studies.**\\nWe agree this is an interesting and meaningful additional direction, but it falls outside the scope of our work. We will mention this more explicitly in the future work section.\\n\\n**Figure 2(b) Clarification.**\\nNote that the caption already reads \\\"Notice that not all models exist in both datasets\\\". 
However, we will think about making this more explicit.\n\n**Human Baseline Comparison.**\nWe thank the reviewer for their constructive comments. However, note that we already discuss blatant plagiarism and use human baselines to motivate that the snippets we consider are unlikely to be generated by humans. Discussing the details of each memory mechanism would require additional empirical evidence and thus falls outside the scope of our work.\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback.\n\n> The evaluation results might be biased because the authors cannot access the real training dataset of evaluated LLMs.\n\nAs we mention in the experimental setup, matches against AuxDataset indeed provide only a lower bound on the actual reproduction from models\u2019 training data. However, this means that the actual amount of reproduction is *at least as large* as what we report, thereby supporting our results.\n\n> For example, for the prompt mitigation part, the authors do not demonstrate which dataset they are using.\n\nAs mentioned in Sec. 4, \u201cwe only evaluate a subset of all prompts; see Appendix A.4 for details.\u201d, which in turn states \u201cWe do not evaluate the mitigation strategies for WritingPrompts, ELI5, and book/movie reviews due to high inference costs, but consider all other tasks in Table 1.\u201d. The rebuttal revision also contains our code, including a script to create mitigation prompts from original prompts, and a convenience bash script to obtain LLM generations for those prompts.\n\n> And since they can use WildChat or LMSYS-Chat-1M, what is the motivation for collecting a new dataset?\n\nOur goal is to provide sound quantitative and qualitative insights. This requires a carefully controlled study setup. However, WildChat or LMSYS-Chat-1M are effectively uncurated. Hence, they allow us to show that non-adversarial reproduction exists in the wild, but *nothing else*! 
For example, we find that reproduction heavily depends on the task\\u2014but labeling the tasks of WildChat/LMSYS conversations is prohibitively expensive. Prompts from WildChat/LMSYS also do not reasonably allow us to do controlled comparisons with human baselines.\\n\\n> The length of substrings that are used to calculate the overlap rate is strange.\\n\\nWe made the motivation for our overlap rate threshold more clear in the updated paper and [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y). We are happy to answer all remaining questions.\\n\\n> How will the results change if changing the threshold?\\n\\nThis can be read from the distribution plots (e.g., Fig. 3a). Every x-axis value is a threshold, and the y-axis is the fraction of generated texts that contains a reproduction at or above that threshold. We also added the new Fig. 10 which provides even more fine-grained details.\\n\\n> Why do the authors not consider open-source LLMs where they can know which datasets are used for training? For example, in the Membership Inference Attack area of LLMs, researchers usually use Pythia and the Pile dataset.\\n\\nWe explicitly focus on LLMs that were fine-tuned to be conversational and not leak data, and which are used by millions of users (where reproduction has a high impact). Unfortunately, there are currently no models with fully known training datasets that match those criteria (or are comparable).\"}", "{\"comment\": \"We thank the reviewer for their thoughtful suggestions and questions.\\n\\n> Only exact matches are considered, excluding reproduction of close matches with a low hamming-distance or reproduction of semantics.\\n\\nWe agree that extending our study to a non-verbatim setting is interesting future work. 
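The overlap-rate measurement discussed in these responses (the longest verbatim character-level match between a generation and a reference corpus, with a 50-character reporting threshold) can be sketched as follows. This is an illustrative brute-force sketch of one plausible formalization, not the authors' implementation: the actual study matches against the 10TB AuxDataset via an index optimized for efficient lookups, and the function names below are assumptions.

```python
def longest_verbatim_match(text: str, corpus: str) -> int:
    """Length (in characters) of the longest substring of `text`
    that also occurs verbatim in `corpus` (brute force)."""
    best = 0
    for i in range(len(text)):
        # Only probe lengths that improve on the current best; if
        # text[i:j] is absent from the corpus, so is any extension.
        j = i + best + 1
        while j <= len(text) and text[i:j] in corpus:
            best = j - i
            j += 1
    return best


def overlap_rate(generations: list[str], corpus: str, threshold: int = 50) -> float:
    """Fraction of generations whose longest reproduced snippet has
    at least `threshold` characters."""
    hits = sum(longest_verbatim_match(g, corpus) >= threshold for g in generations)
    return hits / len(generations)
```

Sweeping `threshold` over all values instead of fixing it at 50 yields exactly the kind of distribution curves referenced above, where every x-axis value is a threshold.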
However, we note that this is challenging due to the large cost of finding fuzzy matches in 10TB of internet data.\\n\\n> The results are harder to interpret due to the possibility that a large number of reproductions by both humans and models is not captured by using AuxDataset. Perhaps the extent of the problem could be estimated by running the tests on a dataset that is expanded with additional sources and comparing the resulting numbers to the current ones.\\n\\nAuxDataset indeed only provides a lower bound for reproduction. While we thank the reviewer for their constructive suggestion, we think there is no feasible approach: AuxDataset is already a 10TB snapshot of the Internet, and most publicly accessible sources overlap heavily with AuxDataset.\\n\\n> The selection of 50 character length seemed insufficiently motivated. Especially since it is different from the prior work and results in both memorised and unoriginal phrases being included.\\n\\nWe provide a clearer motivation in our [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y) and the improved discussion in the paper. We are happy to provide further justification if desired. We also note that we report every possible number of characters in our distribution plots (e.g., the x-axis of Figure 3a).\\n\\n> Are strings of length 40-60 (Line 45) or 50 (Line 129) considered?\\n\\nWe define *overlap rates* in terms of strings of length at least 50 characters. The updated paper avoids the misleading statement on Line 45.\\n\\n> Do the reproductions occur in similar contexts to the originals?\\n\\nThis is a very interesting question. Unfortunately, AuxDataset does not allow us to retrieve the context of a snippet (efficiently) because its implementation is optimized for efficient lookups (see [Nasr et al., 2023](https://arxiv.org/abs/2311.17035) for details).\\n\\n> Lines 162-164 - what is the filtering procedure for the human-written text for it to not appear on the internet? 
If it is filtered, how did plagiarism appear in the human-written IMDb reviews?\n\nWe source the human-written text from May 2024 or later, which ensures the text is neither in any model\u2019s training data nor in AuxDataset. We do not perform any other form of filtering to avoid introducing biases. Plagiarism appears if, for example, an [IMDb review submitted in July 2024](https://www.imdb.com/review/rw9878463/) copies a [review from 2002](https://www.imdb.com/review/rw0349147/). The review from 2002 is in AuxDataset, hence the July 2024 review has a large verbatim match with AuxDataset.\n\n> Figure 1 could be improved by adjusting the colour scheme and ensuring the readability of the small text.\n\nWe thank the reviewer for this input and updated Figure 1.\"}", "{\"summary\": \"This paper investigates the problem of \\\"non-adversarial reproduction of training data\\\", i.e., the question of \\\"how often do LLMs copy their training data given normal user prompts?\\\". The authors craft some tasks and use two existing user-prompt datasets, and use these datasets to check for overlap between the LLM outputs and their training dataset. Since their actual training dataset is unknown, they instead use a webscale corpus, AuxDataset, as a proxy. Their results show that 5%-15% of LLM outputs are reproductions of training data. They have tested using prompts to mitigate this problem, and they show that prompts can reduce the reproduction rate, but cannot prevent long-tail leakage of training data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The motivation of this paper is clear and specific. 
This paper proposes a new threat, \\\"non-adversarial reproduction of training data\\\".\", \"This paper provides solid experiments across a large range of LLMs.\", \"This paper creates a new task dataset and proposes a method to evaluate the human reproduction baseline; to do so, they collect a human benchmark dataset.\", \"This paper is well written, easy to follow and understand.\"], \"weaknesses\": [\"In section 6, you mentioned that it's hard to distinguish reproduction of common idioms from problematic memorization. Is it possible to estimate how much of the overlap is problematic? Because sometimes citing a known source is not a problem, so that may not be considered a problematic reproduction.\", \"You have tested on temperatures of 0 and 0.7, both of which are low. Can you add experiments with temperatures higher than 1 to see the reproduction rate under high temperature?\", \"You have tested two system prompts in section 4, can you test with more prompts?\"], \"questions\": [\"In the **human-written text dataset**, how do you make sure that the human-written texts are not actually generated by an LLM? Because humans may use LLMs to generate these texts.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank all reviewers for their time and constructive feedback. In this general response, we expand on two points multiple reviewers raised: the 50-character threshold and potential contamination of human baselines. We further uploaded a revision with changes summarized as follows:\\n\\n1. As promised, we released all data (prompts, LLM completions, and matches with AuxDataset) via [https://huggingface.co/datasets/nonadvreproduction/paper-anon-data](https://huggingface.co/datasets/nonadvreproduction/paper-anon-data) and included our code as supplementary material. \\n2. 
We included additional models from the Gemini 1.5 and OpenAI o1 families to provide an even broader picture. \\n3. To better motivate and justify the 50-character threshold for overlap rates, we added clarifications and additional intuition throughout the paper. We encourage reviewers to check the new introduction and motivation sections. \\n4. Relatedly, we included the fraction of text containing a reproduced snippet *for every possible reproduction threshold and model* as Figure 10 in the appendix.\\n\\n**50-character threshold:** Many reviewers wonder whether the minimum snippet length of *at least 50 characters* is meaningful. First, we note that the distribution plots (Fig. 3a, 5, 7b, and the new Fig. 10\\\\) show how much text contains a reproduced snippet *for every possible threshold* (x-axis values). Hence, we do *not just* report reproduction of \\u226550-character snippets, but use a threshold primarily to provide high-level quantitative results.\\n\\nSecond, we choose the particular 50-character value to bridge existing studies of linguistic novelty (short n-grams, \\\\<10 characters) and long adversarial extraction (\\\\>200 characters). We find 50 characters to be an interesting sweet spot where reproduction transitions from common idioms/expressions to problematic reproduction of training data. We updated the paper to elaborate more on those nuances.\\n\\n**Human baselines being LLM-generated text:** Some reviewers (justifiably) wonder if our human baselines contain LLM-generated text. We cannot reliably detect or control for this. However, even if our baselines would contain a small fraction of LLM-generated text, we argue that this makes our results even stronger\\\\! Imagine we could reliably detect all LLM-generated text in our human baselines. This text will have similar overlap rates as our LLM generations. But those rates are higher than what we already measure for humans. 
Hence, if we rid our human baselines from LLM-generated text, the human overlap rates would be the same or even lower, and the *gap between humans and LLMs even larger*.\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for their fast response and raising their score. We will add the mentioned discussion about our human baselines in an updated version of the paper.\"}", "{\"comment\": \"We thank the reviewer for the rigorous assessment of our work.\\n\\n> which refers to the overlap between LLM outputs in response to natural prompts (mainly writing prompts)\\n\\nWe first want to highlight that we explore 15 different tasks in 3 diverse categories, and report all results balanced over tasks and categories. Hence, while there are 1000 prompts for the WritingPrompts (or ELI5) task, they count as much as (for example) the 20 prompts of the satire task.\\n\\n> The task is limited to writing. I suggest the authors consider extending it to other tasks.\\n\\nOur tasks are text-based because LLMs are trained on text and hence only reproduce text. But we make sure to use diverse tasks, from open-ended creative writing to strict formal recommendation letters. We are happy to explore more abstract tasks that can yield reproduction if the reviewer has any suggestions.\\n\\n> Generally, open-ended writing tasks are more likely to lead LLMs to recite memorized training data.\\n\\nWe find the opposite to be true, i.e., open-ended writing tasks are *less likely* to lead LLMs to recite memorized training data (see \\u201cExpository writing elicits the most reproduction.\\u201d in Sec. 3, or Figures 3, 4, and 6).\\n\\n> The authors\\u2019 conclusion seems somewhat obvious, as LLMs are explicitly fit on internet text. 
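As a concrete illustration of how the distribution plots described above are read, here is a small sketch that turns per-text longest-match lengths into one such curve. The function name and input format are assumptions for illustration:

```python
def reproduction_curve(longest_match_lengths: list[int], thresholds: list[int]) -> list[float]:
    """For each threshold t, the fraction of generated texts whose longest
    reproduced snippet is at least t characters long -- i.e., one survival
    curve of the kind plotted over all possible thresholds."""
    n = len(longest_match_lengths)
    return [sum(length >= t for length in longest_match_lengths) / n for t in thresholds]
```

The curve is non-increasing in the threshold by construction, and the 50-character overlap rate is simply the curve evaluated at t = 50.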
Intuitively, LLMs are more likely to produce text resembling their training corpus than humans.\\n\\nOur better motivation in the rebuttal revision and [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y) should make the significance of our findings more clear. Moreover, we would like to ask the reviewer for the reasoning behind their intuition, as we find it not immediately obvious (humans, after all, are also \\u201ctrained\\u201d on human text). Either way, our study does the work to support this intuition with sound empirical evidence.\\n\\n> Building on the first point, the authors propose using prompt design to mitigate overlap. However, the method and its underlying principles lack significant innovation.\\n\\nWe do not propose any defense against non-adversarial reproduction. Instead, we include a study of prompting as a mitigation *precisely because it is an obvious strategy* and highlight that this *simple approach is insufficient*. We are also happy to cite existing work that uses prompting as a defense against reproduction, as we are not aware of any.\\n\\n> The authors appear to conflate reproduction of internet text and training data. These are not equivalent, as the training data depends on the model's degree of fit. 
Especially when using a simulated dataset, this discrepancy may be amplified.\\n\\nWe read this feedback as \\u201cAn LLM\\u2019s output might match internet text in AuxDataset just by chance and not because of memorization.\\u201d However, if most matches in our paper were due to chance, we would expect overlap rates between LLMs and humans to be very similar (which they are not).\\n\\n> I suggest that the authors consider using some training data detection methods (e.g., \\\\[1\\\\]) to assist in identifying training corpus when exploring reproduction of training data.\\n\\nMembership inference for LLMs suffers from severe issues and cannot yet reliably prove that a model was trained on some particular data (e.g., [Zhang et al., 2024](https://arxiv.org/abs/2409.19798)) and is not directly relevant for our study. We hence rely on the established AuxDataset approximation by [Nasr et al., 2023](https://arxiv.org/abs/2311.17035).\"}", "{\"summary\": \"The paper explores to what degree LMs reproduce their training data in natural circumstances, i.e. settings where there is no adversarial pressure to reproduce the text. Human-written text is used to compare the extent to which the completions have exact matches in AuxDataset. The results show that unintended reproduction occurs more if the text is generated by models instead of humans, and if the task is expository, e.g. tutorials. Two system prompts are investigated for mitigating unintended reproduction, yielding moderate success.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1) The paper is well-written and accessible. The figures effectively convey the key findings.\\n2) The empirical results are extensive and presented in a way that is easy to interpret. 
The authors test the most relevant models (including GPT, Claude, and Llama) and use varied datasets including real conversations.\\n3) The experiments are well-designed, the knowledge cutoffs and dataset limitations are addressed in the text.\", \"weaknesses\": \"1) Only exact matches are considered, excluding reproduction of close matches with a low hamming-distance or reproduction of semantics.\\n2) The results are harder to interpret due to the possibility that a large number of reproductions by both humans and models is not captured by using AuxDataset. Perhaps the extent of the problem could be estimated by running the tests on a dataset that is expanded with additional sources and comparing the resulting numbers to the current ones.\\n3) The selection of 50 character length seemed insufficiently motivated. Especially since it is different from the prior work and results in both memorised and unoriginal phrases being included.\", \"questions\": \"1) Are strings of length 40-60 (Line 45) or 50 (Line 129) considered?\\n2) Do the reproductions occur in similar contexts to the originals?\\n3) Lines 162-164 - what is the filtering procedure for the human-written text for it to not appear on the internet? If it is filtered, how did plagiarism appear in the human-written IMDb reviews?\", \"nitpicks\": \"1) Figure 1 could be improved by adjusting the colour scheme and ensuring the readability of the small text.\\n2) The use of \\u201caligned\\u201d in section 5 could be more precise. While Christiano et al., 2017 and Ouyang et al., 2022 describe alignment as a continuous objective of RLHF fine-tuning, Nasr et al. (2023) simply uses \\u201caligned\\u201d to describe models that have undergone RLHF. 
To avoid this ambiguity, more specific terms like \\u201cRLHF-tuned\\u201d or \\u201calignment fine-tuned\\u201d could be used to describe these models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper thoroughly studies non-adversarial memorization behaviors in LLMs. The reviewers largely vote for acceptance, and I agree with the reviewers. The reviewers raised a number of concerns which did not mitigate their positive overall feelings. For example, multiple reviewers brought up the choice of 50 characters for the threshold, but the authors pointed out that their plots are not at all restricted to this threshold choice, and the threshold was just employed for illustrative purposes. Reviewers also asked the authors to compute frequency of reproduced sequences in the training data, but this is not known since people preprocess their data, often deduplicating. Reviewers mentioned that text from various sources, for example Reddit, could be LLM generated, which indeed is a possible confounder, but it does not invalidate the results and is an interesting topic for the future. Similarly, I do not think it is a problem that the paper focuses on only exact matches although I do think that is a limitation that should be explored in future work. Finally, the authors have added further experimental details which address a number of points raised by reviewers. 
Overall, the feedback does not look disqualifying to me, and the reviewers and I broadly view this paper favorably.\", \"additional_comments_on_reviewer_discussion\": \"The authors comprehensively addressed reviewer feedback, including many new experiments and paper edits, and they also published their data.\"}", "{\"comment\": \"We thank the reviewer for their careful evaluation of our work and the constructive feedback.\\n\\n> In section 6, you mentioned that it's hard to distinguish reproduction of common idioms from problematic memorization. Is it possible to estimate how much of the overlap is problematic? Cause sometimes citing a known source is not a problem, so that may not be considered a problematic reproduction.\\n\\nThis is a great question. Determining whether memorization is problematic is highly context-dependent and varies based on the nature of the reproduced text (e.g., reproducing copyrighted material or private information raises different concerns than reproducing Wikipedia content). We believe most of these considerations are outside the realm of science. However, our analysis of reproduction at longer thresholds (e.g., in Figure 3\\\\) and long-tail events demonstrates that models can produce extended verbatim copies that are likely to be problematic in many contexts.\\n\\n> You have tested on temperature of 0 and 0.7, both are low temperature. Can you add experiments on temperature higher than 1 to see the reproduction rate under high temperature?\\n\\nWe aim to explore chat models in production setups, which are typically used with temperature between 0.7 and 1.0 (to the best of our knowledge). Hence, we do not consider temperatures above 1.0, and do not study more values between 0 and 1 due to the high inference cost. However, based on the experiments in App. 
B.1, we do not expect large differences.\\n\\n> You have tested two system prompts in section 4, can you test with more prompts?\\n\\nWe wanted to study two cases, i) a typical assistant prompt to see how reproduction behaves in real-world deployments of chatbots (e.g., Claude or ChatGPT), and ii) an extremely specific prompt to understand what the largest possible reduction of reproduction via prompting could be. We do not explore different instantiations of those two settings because they suffice to show a large difference and inference is costly.\\n\\n> In the human-written text dataset, how do you make sure that the human-written texts are not actually generated by an LLM? Cause human may use LLM to generate these texts.\\n\\nThis is a great observation, and we can indeed not rule out that a small fraction of humans partially relied on LLMs. However, as we discuss in the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y), this does not devalue our results (but makes them even stronger).\"}", "{\"summary\": \"This work discusses the case of unintentional reproduction of training data by large language models. While most of the literature discusses an adversarial nature of prompting to extract training data, this work tries to quantify how frequently this influence happens in a non-adversarial situation. One of the findings of the work is that non-adversarial reproduction is much higher in expository tasks than in creative ones, and even prompting techniques, while they can reduce the average reproduction rate, are not sufficient to prevent longer sequences from appearing. One of the highlight results of this work is that about 15% of the text output by popular conversation language models overlaps with short snippets of text on the internet, much higher than baseline rates by humans on the same task.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Overall, this paper was a joy to read. 
I found it to be very thoughtfully written. I routinely ended up in situations where I had a particular thought experiment in mind, and the next section showed just that set of ablations or experiments.\", \"I liked Figure 4(b) which shows how reproduction strongly depends on the task. This consolidates an important finding/hypothesis. At a high level, while the paper reports that 8-15% of LLM-generated text overlaps with existing online snippets, it goes further to analyze the types of overlaps. This also highlights the complexity of defining \\\"problematic\\\" reproduction.\", \"I found the experiment on extracting Quotations quite interesting, in particular, because it shows incorrect attribution of the quote.\", \"Distribution of overlap lengths: A small percentage of outputs contain very long reproduced sequences. This long-tail phenomenon suggests that even with low average reproduction rates, LLMs can still pose risks in specific cases.\"], \"weaknesses\": [\"Frequency of Reproduced Sequences: The paper could benefit from clarifying how often the reproduced sequences appear within their training data proxy, AUXDATASET. Understanding whether these snippets are rare or commonly encountered would help contextualize the reproduction risks.\", \"Justification of 50-Character Threshold: The choice of a 50-character threshold to define reproduced sequences is not fully justified. In particular, this is quite different from past work. While some examples in the Appendix suggest that 50 characters is a meaningful number, I believe most examples highlight that such sequence lengths can be so common in the natural language distribution that their reproduction does not matter. 
Further explanation would help readers assess whether this threshold adequately captures the difference between common phrases and more problematic reproductions.\", \"Data in Figure 2(b): Figure 2(b) appears to have only partial bar plots for some models (Llama and GPT), making the comparison across models less robust. Or am I missing something here?\", \"Overall, I am constantly battling between thinking that 50 characters is too less, and then seeing the argument that these reproduction rates are much higher than humans. This makes me wonder if humans are the right baseline here. Would a human with a passage (RAG style reading comprehension) be a better baseline? There is a qualitative dichotomy here: the 50 characters do not feel meaningful when visualized, yet stay higher than what a human would reproduce.\"], \"questions\": \"Please refer to sections above and answer the questions\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarification on sentence\", \"comment\": \"We apologize for the slightly weird phrasing. In other words:\\n\\nWe manually investigated all three human baselines (IMDb reviews, WritingPrompts, ELI5) regarding blatant human plagiarism. We did find a lot of plagiarism for IMDb reviews, but we did not find any instances of blatant human plagiarism for WritingPrompts or ELI5.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your thoughtful responses to my review. Your paper has sparked important discussions about the nature of text reproduction in language models, and I appreciate your careful attention to the concerns raised.\\n\\n**Training Data Frequency Analysis**\\nThis is not hard for open datasets. Check out: https://huggingface.co/spaces/liujch1998/infini-gram\\n\\n**50-Character Threshold and User Perception**\\nThe threshold choice remains a crucial concern that deserves more rigorous investigation. 
This is particularly important given the sensitive nature of text reproduction and its implications for content creators and users. I strongly recommend:\\n- Conducting user studies to understand how different stakeholders (content creators, users, domain experts) perceive text reproduction at various lengths\\n- Including controlled experiments where participants evaluate the \\\"problematic\\\" nature of reproduced text across different thresholds and contexts\\n- Incorporating findings from the NLP community's established methodologies for human evaluation and annotation\\n\\nThe NLP community has a rich history of using human studies to validate important thresholds and metrics. This would be an excellent opportunity to bring that tradition to this important problem rather than relying solely on computational justifications.\\n\\n**Figure 2(b) Clarification**\\nThank you for explaining the partial nature of the bar plots. To prevent confusion can you add a note in the figure caption explaining the data availability?\\n\\n**Human Baseline Comparison**\\nYour clarification about the distinction between prompt copying and training data reproduction is well-taken. To strengthen this point:\\n- Consider adding a diagram illustrating the different types of \\\"copying\\\" (human memory vs. LLM training vs. RAG)\\n- Include a discussion of how different memory mechanisms might affect reproduction patterns\\n\\n\\n\\nYour paper makes important contributions to understanding non-adversarial reproduction in language models, and your responses have clarified several key points. The methodological choices, while sometimes difficult to justify perfectly, seem reasonable given the constraints of studying this phenomenon. 
I particularly appreciate the thorough analysis of task-dependent reproduction rates and the careful consideration of what constitutes \\\"problematic\\\" reproduction.\"}", "{\"title\": \"Reviewer response\", \"comment\": \"Thank you to the authors for answering all my questions! This was a great rebuttal that hit all the right points.\", \"summary\": [\"**I'm in favor of accepting this paper and would like to see it at ICLR**. The contribution is important. I'm raising my score to an **8** and confidence to a 4 in this assessment, though I feel this paper is closer to a **7** if the score existed.\", \"This is an empirically driven work, though that is common in the field. Ultimately I would have liked to see more explanation and analysis of the phenomenon rather than just pointing out its existence.\"], \"agreement_with_authors\": \"1. 50-character limit: This is fair for an empirically driven threshold; ultimately, the paper does communicate other thresholds as the authors pointed out. Figure 10 is nice to have; it is interesting that o1 has fatter tails. I believe it's just an aesthetic debate whether 50 was the best cutoff to emphasize.\\n2. Lack of qualitative exploration: The authors make a good point that qualitative exploration deserves its own future work. Thanks for releasing the full dataset; I hope it inspires further work in the field.\\n3. AI-contaminated human baselines: The authors raise a strong argument that I agree with. The gap between humans and AI would be even larger if human-generated text was completely AI-free. However, this caveat, that the measured human baseline is an upper bound on the true human reproduction rate, should be discussed in the paper, even if it is ultimately in favor of your results. The paper's completeness would be enhanced if there were an AI-free human baseline, though.\", \"wish_there_was_more\": \"1. The system prompt. 
I understand the practical limitation is that defending against text reproduction is not the main focus of the paper, but the proposed prompt-engineered defense is unsatisfying. Using only a single prompt feels like a very ad-hoc defense, and the scientific contribution of this section is weak.\"}", "{\"comment\": \"Thank you for the response. Could you please provide clarification about this sentence: \\u201cHowever, we did perform a manual investigation (as for IMDb reviews), which did not reveal any significant cases of blatant plagiarism (as opposed to IMDb reviews).\\u201d It seems like a typo. Are \\\"for\\\" and \\\"opposed to\\\" being used correctly here?\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback and contextualization of our work.\\n\\n> 50 character cutoff may overestimate regurgitation: The authors acknowledge this limitation, but it is difficult to differentiate based on character length alone whether the data is truly regurgitated off the internet or just due to it being a common phrase, especially when the length is around 50 characters.\\n\\nWe updated the paper to make the motivation for the 50-character threshold more clear; please see the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y) for details.\\n\\n> Additional analysis to estimate a numerical proportional breakdown between these two categories would make the paper more rigorous. There is far less doubt about text reproduction vs. common phrases past the 100-150 character point.\\n\\nWe agree that an estimate of this proportion would be valuable. However, classifying reproduced text as short facts/language constructs vs. problematic text is a challenging task that deserves its own future work. In addition, we hope to provide more confidence through the new Fig. 
10, which provides a detailed picture of reproduction *for every possible number of characters* (x-axes).\\n\\n> AI contaminated human baselines\\n\\nWe cannot rule out a very small fraction of contamination. However, as we discuss in the [general response](https://openreview.net/forum?id=590yfqz1LE&noteId=LBuTkeYE0Y), this is not a problem (and might even strengthen our results).\\n\\n> I would find it interesting if you can also evaluate the reproduction length distribution of human data known to be mostly free from AI contamination, i.e. before the widespread release of LLM assistants.\\n\\nWhile using such human baselines would be ideal, it is unfortunately practically infeasible. The only way to do this would be with a snapshot from the Internet from before the first LLM release. However, we cannot obtain such a snapshot. Even if we could, this snapshot would severely underestimate reproduction of LLMs (which were trained on much more recent text).\\n\\n> How fast can you check whether a given generation is in AuxDataset? Curious if we can reduce the probability of regurgitating very long text by doing a check against an internet proxy dataset at inference time.\\n\\nThis is an interesting suggestion. While lookups in AuxDataset require several TB of RAM and a long startup time, the latency per inference request is modest. The whole process could be made much more efficient (at the expense of a small false-positive rate) by using a bloom filter. One issue is that this form of response filtering enables users to directly test if a given piece of text is in the model\\u2019s training data (see, e.g., [Debenedetti et al., 2023](https://arxiv.org/abs/2309.05610)), a capability that model providers likely don\\u2019t want to expose. 
Nevertheless, some services employ a form of filtering (e.g., [Github Copilot](https://docs.github.com/en/copilot/managing-copilot/managing-copilot-as-an-individual-subscriber/managing-copilot-policies-as-an-individual-subscriber#enabling-or-disabling-suggestions-matching-public-code)).\\n\\n> It's very interesting that the system prompt you used below reduces reproduction length. Why do you think this works? Did you try any other system prompts outside of this and the Claude prompt? Is it because the model can internally model a circuit to determine the probability of text being in its training data?\\n\\nWe thank the reviewer for this food for thought. First, we did not explore significantly different system prompts beyond what we report, because we only want to highlight that naive prompting does not mitigate non-adversarial reproduction. Second, we do not have a clear hypothesis or evidence. But this would be very exciting future work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
58lbAsXCoZ
Neural Fluid Simulation on Geometric Surfaces
[ "Haoxiang Wang", "Tao Yu", "Hui Qiao", "Qionghai Dai" ]
Incompressible fluid flow on surfaces is an interesting research area in fluid simulation and a fundamental building block in visual effects, the design of liquid crystal films, and scientific analyses of atmospheric and oceanic phenomena. The task brings two key challenges: extending the physical laws to 3D surfaces and preserving energy and volume. Traditional methods rely on grids or meshes for spatial discretization, which leads to high memory consumption and a lack of robustness and adaptivity across mesh qualities and representations. Many implicit-representation-based simulators such as INSR have been proposed for storage efficiency and continuity, but they face challenges in surface simulation and suffer from energy dissipation. We propose a neural physical simulation framework on surfaces based on implicit neural representations. Our method constructs a parameterized vector field with exterior calculus and the Closest Point Method on surfaces, which guarantees the divergence-free property and enables simulation on different surface representations (e.g., implicit neurally represented surfaces). We further adopt a corresponding covariant-derivative-based advection process for surface flow dynamics and energy preservation. Our method shows higher accuracy, flexibility, and memory efficiency in simulations on various surfaces with low energy dissipation. Numerical studies also highlight the potential of our framework across practical applications such as vorticity shape generation and vector field Helmholtz decomposition.
[ "Fluid simulation", "Implicit Neural Representation", "Exterior Calculus" ]
Accept (Poster)
https://openreview.net/pdf?id=58lbAsXCoZ
https://openreview.net/forum?id=58lbAsXCoZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zGMaqPxEgd", "vEzxfVYZbm", "r0ezDk7iz3", "qLCxopi6Pb", "onmhpHp4a9", "oniq4TTdoi", "k8S1Fl3pEM", "job2VbCZHF", "hj6u5gYAcR", "aDouOQbASR", "VuX4HNTZpW", "UuvFPAnVh9", "SRH4ZYbvjp", "MnNerFEfmI", "IgZDNTG1JN", "Fxg8uwTHgW", "CF5LN6fv38", "BPUpKwHw1I", "7l0e99HfCO", "2FZiWG02DF" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730586056312, 1732251620880, 1730210518264, 1731825377024, 1732226225882, 1732263941089, 1731825339963, 1737523718497, 1731825311722, 1734577054350, 1730699029920, 1731825298681, 1732231331953, 1732263899693, 1732263916034, 1731825318340, 1732251877971, 1730658181055, 1731825331216, 1731825263938 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_dCvp" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_V8TK" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_V8TK" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_Y5ci" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Area_Chair_srav" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_Y5ci" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_dCvp" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Reviewer_V8TK" ], [ 
"ICLR.cc/2025/Conference/Submission5666/Reviewer_Gjy9" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ], [ "ICLR.cc/2025/Conference/Submission5666/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a framework for fluid simulation on surfaces that is divergence-free by construction. This is done by using exterior calculus tools, in special the definition of the divergence based on the Hodge star operator and the exterior derivative, the property that the Hodge star is (up to a sign) its own inverse and the nilpotent property of the exterior derivative. The Closest Point Method is used to apply those tools on generalized surfaces, making a natural link with Riemannian geometry, and enabling the evaluation in the tangent space around samples in the surface. With those tools it is possible to transit between the surface and $R^3$ as needed, in special for advection which can be done considering the Riemannian metric of the manifold.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"To be honest, I had a lot of fun reviewing this paper. Unless other reviewers flag major flaws that I could not find, I think it is ready for acceptance.\", \"It is a very good example of when a simple and elegant core idea based on strong guarantees enables a lot of very interesting questions and consequences. The core idea of using the nilpotent property of the external derivative and the self inverse property of the Hodge star operator to force a divergence free vector field on a surface is elegant and foment all the paper discussion.\", \"The loss formulation is as expected, very intuitive.\", \"As far as I know, It is the first method that converges in implicit surfaces.\", \"The method does not rely on training data.\", \"Evaluation is robust. The idea of starting with analytic examples where ground truth is easier to evaluate is good.\", \"Related work section cites every paper I could think of. 
The care with the citation of classic papers (even for datasets) is notable.\", \"Mathematical notation is very clean. It is easy to see that there is a lot of effort with notation polishing.\", \"The paper makes use of very good references for background Math.\"], \"weaknesses\": [\"I will point out some minor weaknesses that could be fixed to improve the paper.\", \"(1) An image depicting equation (4) and another showing the advection process would greatly improve the friendliness of the paper since both processes are very geometric. That would make the paper appreciated by a broader audience.\", \"In the image for equation (4) it is sufficient to show the neighborhood of the surface, the mapping $j$, the mapping $cp*$, the vector resulting from the gradient and the tangential vector acquired from the cross product of the gradient with the normal.\", \"For the advection image it is sufficient to depict the push forward (pull back) function in action and the inner product using the Riemannian metric.\", \"(2) The presentation could be more friendly by giving some intuition along the text. I will point out some places I think this kind of intuition would be beneficial.\", \"Line 199: could say that even though the divergence may be expressed using different k-forms, the definition of div(v) is the 0-form version resulting in a scalar function.\", \"Line 299 (equation 4): could say that $cp^*\\\\sigma$ is a notation abuse because $cp^*$ expects a k-form but $\\\\sigma$ is a 0-form. Also that the composition with $j(x)$ is to restrict the computation to the surface, the gradient is to acquire a vector field and the cross product is to acquire a tangent vector field. A reference to the proposed image would also be good here.\", \"Line 234 (equation 5): could say that the vorticity expression considers the rotation axis equal to the normal because it is evaluated on the surface.
Then the vorticity may be represented as a scalar field.\", \"Line 257: could say that the expression is a neighborhood extension of the surface along the normal field.\", \"Line 320 (equation 13): could say that the $<. , .>_p$ notation is an inner product considering the Riemannian metric of the manifold of the tangent space at point p. A reference to the proposed image would be good here.\", \"Line 327 (equation 15): could say that the inner products are the first-order approximation of the push forward function.\", \"Line 344: that paragraph could say that the harmonic components do not contribute to the vorticity and that is the reason why the additional harmonic network is needed. Could also say that it is constant along the simulation because it is associated with the topological structure of the surface, which does not change over time.\", \"(3) This paper deserves an acronym so it may be more easily referenced in the future by other researchers. I advise the authors to think about changing the title to include a creative acronym.\"], \"questions\": \"(1) Why introduce $f$ in equation (8) instead of using $\\\\sigma$ directly?\\n\\n(2) I think a $t$ subscript is missing in equation (8) ($\\\\Phi_t$).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Honestly I\\u2019m a bit surprised by the other reviews, especially the one that rated this submission a 10. I strongly believe that this manuscript, in its current form, is not ready for publication. I think this paper would benefit from another submission cycle, carefully reassessing the underlying assumptions about the method. I restate that my major concerns are related to **paper usefulness** (the proposed method is extremely slow (17 hours for a 2D mesh!!)
and has little application), **exposition** (poor English, several typos, and I just noticed a few in my original rebuttal), **lack of proper evaluation** (few examples, some of them with noticeable artifacts, quantitative evaluation is poor), **little contributions** (it's widely known that taking the curl of the streamfunction yields a div-free velocity field, plugging CPM into a neural representation is not a major breakthrough) and several important **related works are missing**. Moreover, employing streamfunctions to solve flow equations makes boundary conditions difficult (not addressed at all in the paper), and it is a reason why the majority of recent approaches in Computer Graphics are using impulse-based formulations for energy preservation.\", \"let_me_address_a_few_inconsistencies_on_the_rebuttal\": \"_\\\"To clarify, we do not attribute the independence from mesh quality solely to the implicit neural representation. In fact, there exist many methods that do not incorporate the Closest Point Method and still face challenges related to mesh quality, and our goal is to improve these approaches.\\\"_\\n\\nThere are no examples that support the claim that this paper improves previous approaches. A simple solver (e.g. Covector Fluids) implemented on a GPU would yield much faster results. To properly compare against previous approaches, the authors should increase the resolution of a baseline solver until it reaches a performance that is comparable to the proposed approach. I bet that a common solver would require so many variables to be that slow (on a GPU) that the application would crash with out-of-memory issues before reaching that point.\\n\\n_\\\"The paper referred to here is mainly designed to deal with the numerical dissipation caused by the advection and projection. But none of them are designed for the surface flow (not even with a surface flow demo).
While their methods could theoretically be extended to surface flows, the additional operators they propose would require careful adaptation and design for surface-specific contexts, which is beyond the scope of this work.\\\"_\\n\\nCovector Fluids would work on a surface mesh; they have 2D examples. That's the beauty of DEC: it really abstracts the domain representation; one has only to implement the exterior derivative properly (which is easy). I suppose that Covector Fluids did not show examples on a surface because of the limited application of such flows. \\n\\n_\\\"Our approach requires the use of an additional tool based on the Closest Point Method (CPM), as described in [7], to theoretically identify the stream function and other necessary differential forms. \\\"_\\n\\nCPM is not necessary to identify the streamfunction as a differential form. CPM is a way of discretizing variables; it has nothing to do with the divergence-free property or with the identification of differential forms.\\n\\n_\\\"In contrast to Chorin's operator splitting scheme, our approach guarantees divergence-free behavior in a functional manner. In comparison, the advection-projection method introduces numerical diffusion, as noted in prior works (Chang et al., 2002 [8]; Elcott et al., 2007 [5]). While we employ a first-order optimizer for the non-linear problem, which may result in numerical inaccuracies, our method is still a better alternative to the classic ones both theoretically (as a form of Lie advection for divergence-free fields) and empirically (in the energy preservation studies).\\\"_\\n\\nYou are referencing papers that are **20 years old** to say **that operator splitting introduces dissipation and therefore it won't work**? What about **all the other impulse-based formulations** that were recently published? Are they all dissipative and inefficient as well?
Take some time to reevaluate your perspective here.\"}", "{\"summary\": \"This paper proposes an implicit neural representation to improve solvers that simulate flows on geometric surfaces through geometric adaptivity. The authors propose a neural physical simulation framework to construct a parameterized vector field on surfaces using exterior calculus formalism. Through a Closest Point Method, an implicit neural network representation is proposed that is able to maintain a divergence-free property intrinsically. The divergence-free condition is an important property of Navier-Stokes solvers, and strictly enforcing it is a challenging task. Furthermore, the authors claim that the proposed approach is able to accurately preserve the energy of the flow as time advances.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Proposing alternative representations to the standard discretizations (e.g., grid/meshes) for solving PDEs is a very interesting and challenging topic of research. The authors propose a method that considers specific intricacies of the PDE solution when employing neural representations, along with desired properties that can potentially be satisfied in a continuous fashion (e.g., the divergence-free condition).\", \"weaknesses\": [\"Unfortunately this paper is clearly below the ICLR quality acceptance bar. My major concerns are as follows:\", \"Poor exposition along with several typos which make the paper hard to understand. For example, the structure of Section 3.1 is composed of fragmented phrases forming very short paragraphs, making it hard to follow.
Several typos and confusing phrasal structures (L11: \\u201cIncompressible Euler fluid on the surface\\u201d, L20: \\u201cWe contribute a neural physical simulation framework on the surface with the implicit neural\\u201d, L240: \\u201cIn the meanwhile\\\" to name a few) greatly compromise the quality of the paper.\", \"The main idea of the paper is based on wrong assumptions. The poster \\u201cClosest Point Exterior Calculus\\u201d, on which the paper is heavily based, already offers a solution that is independent of the mesh quality. This invalidates one of the main motivations of this submission, namely that previous approaches are dependent on mesh quality and thus an implicit neural representation is required. Moreover, the assumption that storage is a limiting factor on solvers is also incorrect, since a solver usually has to store a single time-step of the represented variables for advancing the simulation state. The presented results also show very modest resolutions.\", \"There are missing references and/or previous methods are not thoroughly considered, leading to an outdated methodology proposition. Recent approaches (\\u201cCovector Fluids\\u201d, \\u201cImpulse Particle In Cell\\u201d, \\u201cFluid Simulation on Neural Flow Maps\\u201d, \\u201cEulerian-Lagrangian Fluid Simulation on Particle Flow Maps\\u201d and \\u201cLagrangian Covector Fluid with Free Surface\\u201d to name a few) adopt structure-preserving integrators by considering the deformation of the flow map during advection. This is ignored by the proposed advection method, which has a rather lengthy description in the paper.
Lastly, Elcott et al., 2007b does not suffer from instabilities, contrary to what is mentioned in the manuscript, and modern structure-preserving solvers (\\u201cImpulse Particle In Cell\\u201d) are able to accurately advect velocities without major stability issues.\", \"The paper partially focuses on showing mathematical proofs that are known by the exterior calculus community (divergence-free vector fields on surfaces), which makes the described theory not so relevant as new theoretical contributions. The authors could just reference relevant discrete exterior calculus material or move the lengthy mathematical descriptions to the Appendix.\", \"The paper should have been focused on more relevant aspects of the implicit neural representation, such as network structure, how to properly tackle high frequencies of the implicit neural field, how to make the training/evaluation process efficient (e.g., check \\u201cInstant Neural Graphics Primitives with a Multiresolution Hash Encoding\\u201d), etc.\", \"The authors mention that pressure projection (usually the most expensive part of a fluid solver) is not required by their approach. However, they solve a non-linear optimization problem iteratively with a simple ADAM gradient descent approach. This approach is way less efficient than traditional operator splitting, as evidenced by the timings shown in Table 1 (16h for 80k vertices is a very inefficient timing for the considered resolution). Lastly, there seem to be some high-frequency \\u201cringing\\u201d artifacts generated by the proposed method in Figure 3 which are not present in ground truth or in the HOLA-7 results.\", \"These are some of the reasons that justify my low score for this paper.
I suggest the authors rethink their approach before resubmitting the manuscript.\"], \"questions\": [\"Did the authors explore alternative network designs for representing the implicit neural fields (such as \\u201cInstant Neural Graphics Primitives with a Multiresolution Hash Encoding\\u201d)?\", \"How does the method fare in simulations where regions of turbulence are highly concentrated? Is the proposed adaptivity property working as expected?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Reference:**\\n\\n[5]. Elcott S, Tong Y, Kanso E, et al. Stable, circulation-preserving, simplicial fluids[J]. ACM Transactions on Graphics (TOG), 2007, 26(1): 4-es.\\n\\n[6]. Azencot O, Wei\\u00dfmann S, Ovsjanikov M, et al. Functional fluids on surfaces[C]//Computer Graphics Forum. 2014, 33(5): 237-246.\\n\\n[7]. Li M, Owens M, Wu J, et al. Closest Point Exterior Calculus[M]//SIGGRAPH Asia 2023 Posters. 2023: 1-2.\\n\\n[8]. Chang W, Giraldo F, Perot B. Analysis of an exact fractional step method[J]. Journal of Computational Physics, 2002, 180(1): 183-199.\\n\\n[9]. Li Z, M\\u00fcller T, Evans A, et al. Neuralangelo: High-fidelity neural surface reconstruction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 8456-8465.\"}", "{\"comment\": \"Thank you for your reply. I found the discussion on **Q6/R6** very insightful.\\n\\nRegarding **Q2/R2**, while I understand the rationale for downplaying the contribution to address reviewer concerns, I believe it is entirely appropriate to highlight the application of CPM to neural representations. Personally, I consider CPM to be an excellent formalism for defining differential forms in modern neural representations, which I think the ML community would benefit greatly from.
That said, the modified contribution is fine as it is.\\n\\nHowever, I respectfully disagree with reviewer V8TK\\u2019s reasoning. While it is true that there are previous works on CPM (e.g., [Li et al., 2023] and [King et al., 2024], and [Li et al., 2023] is a *poster*), I don\\u2019t believe this invalidates the significance of bringing CPM into this context. This is a unique and valuable contribution, and the reasoning used to justify such low scores seems *overly harsh*. CPM aligns naturally with modern neural representations due to their \\\"volumetric\\\" and \\\"dense\\\" nature, unlike explicit representations of CPM, which typically require a **dense grid** with spatial data structure.\"}", "{\"comment\": \"Thank you for your feedback. I agree that emphasizing the application of CPM to neural representations is important, as it provides valuable insights to the ML community. We again appreciate your perspective and support.\"}", "{\"comment\": \"> **Q5:** The paper should have been focused on more relevant aspects of the implicit neural representation, such as network structure, how to properly tackle high-frequencies of the implicit neural field, how to make the training/evaluation process efficient (e.g., check \\u201cInstant Neural Graphics Primitives with a Multiresolution Hash Encoding\\u201d).\\n\\n**R5:** Our work primarily focuses on the neural parameterization of surface divergence-free vector fields. The first step is to model the field using a neural network, after which we can explore further optimizations, such as improving network design, enhancing structure, or increasing efficiency using other implicit neural representation (INR) architectures (e.g., Neural Graphics Primitives, NGP). These directions are part of our future work and are discussed in detail in Appendix F.3.\\n\\n\\n> **Q6:** The authors mention that pressure projection (usually the most expensive part of a fluid solver) is not required by their approach. ... 
Lastly, there seem to be some high-frequency \\u201cringing\\u201d artifacts generated by the proposed method in Figure 3 which are not present in ground truth or in the HOLA-7 results.\\n\\n**R6:** We acknowledge the limitation of our method in terms of time consumption, which has been explicitly discussed in both Sec. 6 and Appendix F.3. In contrast to Chorin's operator splitting scheme, our approach guarantees divergence-free behavior in a functional manner. In comparison, the advection-projection method introduces numerical diffusion, as noted in prior works (Chang et al., 2002 [8]; Elcott et al., 2007 [5]). While we employ a first-order optimizer for the non-linear problem, which may result in numerical inaccuracies, our method is still a better alternative to the classic ones both theoretically (as a form of Lie advection for divergence-free fields) and empirically (in the energy preservation studies). Regarding the \"ringing effect,\" we are uncertain whether it occurs in our method. For implicit neural surface representations (INSR), it might arise due to the Siren parameterization, and certain periodic patterns may be exhibited.\\n\\n\\n> **Q7:** Did the authors explore alternative network designs for representing the implicit neural fields (such as \\u201cInstant Neural Graphics Primitives with a Multiresolution Hash Encoding\\u201d)?\\n\\n**R7:** In our work, we focused on using MLP and Siren for analytic operator computation. The use of Neural Graphics Primitives (NGP) is mentioned in Appendix F.3 as a potential future direction to improve efficiency. However, incorporating higher-order differential operators into NGP requires more careful design, as highlighted in Li et al. [9].\\n\\n\\n> **Q8:** How does the method fare in simulations where regions of turbulence are highly concentrated?
Is the proposed adaptivity property working as expected?\\n\\n**R8:** Our method may introduce some numerical dissipation in highly turbulent regions due to limitations in model representation (e.g., maximum frequency in Siren) and optimization. This can result in errors that reduce the turbulence intensity compared to the desired level. The adaptivity discussed in our paper primarily refers to the ability to handle different geometry representations, which has been evaluated through numerical studies using analytic, mesh, and INR cases. However, turbulence is influenced not only by geometry but also by the characteristics of the vector field itself. We believe that the adaptivity property is not directly related to turbulent simulations.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for the careful review and constructive advice. To address your questions and concerns:\\n\\n> **Q1:** The English in the current version of the paper needs to be improved.\\n\\n**R1:** For any writing issues that are present, we assure you that we will conduct thorough proofreading and optimize the structure and descriptions throughout the paper. \\n\\n\\n> **Q2:** Compared to the actually straightforward application of the CP-EC to the case of flow simulation on surfaces, the paper seems cumbersomely long and is also not as clear to read as the recent papers on the topic referenced in the paper, whose presentation is clearer and more concise. Maybe the authors can try to improve on that.\\n\\n**R2:** Thank you for your valuable advice. We include more detailed preliminaries to assist readers who may not be familiar with differential geometry. For the CP-EC and divergence-free design part, the main content introduces only the necessary concepts and notation, with the proofs provided in Appendix B.
We will stress this point, allowing readers with a foundation in the topic to skip the preliminaries if desired.\\n\\nRegarding the advection part, we believe it is essential to explain how Lie advection works, as this understanding is crucial for interpreting our loss design. We will refine the presentation to enhance clarity and ensure it aligns with the quality and precision of the referenced paper.\\n\\n\\n> **Q3:** How do you do the interpolation of the pulled forms? In the CP-EC poster the authors recommended the Cubic Lagrangian. How does this interpolation affect the divergence-free property of the velocity field? Were there any numerical problems? Is it possible to do an ablation study on this point?\\n\\n**R3:** Actually, interpolation for the pulled forms is not required in our approach. In the classical CP-EC method, interpolation is necessary because the functional is represented on grid points. However, in our method, we utilize a continuous network representation, where interpolation can be considered implicitly included. The numerical issues encountered in our approach might be caused by inadequate fitting rather than interpolation errors. \\n\\nAdditionally, to elaborate further, if the classical method were used with interpolation for CP-EC, divergence-free behavior would only be approximately preserved, with errors arising from the discretized operator computation.\"}", "{\"metareview\": \"In this paper, the authors propose a neural physical simulation framework on surfaces using implicit neural representations. Their method constructs a parameterized vector field utilizing the DEC and the CPM on surfaces, achieving divergence-free simulation across different representations. Additionally, the authors adopt a covariant derivative-based advection process for surface flow dynamics and energy preservation.
In their experiments, they demonstrate the lowest simulation error compared to other methods while maintaining similar memory consumption.\\n\\nHowever, there are several limitations to the proposed method. First, the computational time cost is very high, which limits its practical applications. The examples presented in the paper are relatively simple. Moreover, given the same computational time or with a larger memory footprint -- as the current memory usage is small -- it's not clear whether existing methods could improve their accuracy to the level of the proposed method. Other surface fluid simulation methods, such as [1-2], have shown much more complex demos, but it is unclear whether the proposed method can achieve similar demos as well.\\n\\nNevertheless, as the authors have stated, this is the first study to present simulation results of incompressible fluid flow on neural implicit surfaces with a theoretical guarantee of divergence-free behavior, yielding positive results. Moreover, the authors promise to improve the presentation quality of the paper. Therefore, I recommend accepting the paper but suggest it be presented as a poster.\\n\\n[1] A Vortex Particle-on-Mesh Method for Soap Film Simulation, Tao et al. 2024\\n\\n[2] A Moving Eulerian-Lagrangian Particle Method for Thin Film and Foam Simulation, Deng et al. 2023\", \"additional_comments_on_reviewer_discussion\": \"There is a significant disparity among the reviewers' opinions. The reviewers who provided positive feedback recognize the paper's contribution in combining DEC and CPM with neural representation, noting that the proposed method achieves high accuracy. Conversely, Reviewer V8TK, who offered negative feedback, believes that the proposed method is time-consuming even on simple examples, leading them to question its practical usefulness. Moreover, Reviewer V8TK thinks the experiments are neither comprehensive nor properly conducted, and V8TK questions the technical contribution of the paper.
Despite efforts by both the positive reviewers and the authors, the concerns of Reviewer V8TK remain unresolved. Given the current status of the rebuttal and the reviews, this leads to my final decision.\"}", "{\"summary\": \"This paper introduces a novel framework for simulating incompressible Eulerian fluid flow on 3D surfaces using neural implicit representations. This method leverages the Closest Point Method (CPM) and exterior calculus to parameterize the fluid\\u2019s velocity and vorticity fields directly on the surface without relying on discretization, which reduces memory costs and bypasses the need for conventional spatial discretization. The framework introduces a covariant-derivative-based advection process, which integrates surface flow dynamics while minimizing energy dissipation. Notably, this work is among the first to simulate incompressible fluid dynamics on neural surfaces, achieving enhanced accuracy and energy preservation across various geometric representations.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"### CPM Formulation\\nThe math formulation is clean and concise. It is quite apparent that the authors are coming from a graphics background and I love this clean DDG writing style.\\n- The Closest Point Method (CPM) is relatively new in visual computing, yet its integration with neural fields here aligns with my belief in CPM\\u2019s potential for solving PDEs on surfaces. Compared to surface sampling techniques (as seen in Geometry Processing with Neural Fields [Yang et al., 2021] and similar studies), CPM offers a structured way to define differential operators in volumetric data by rigorously establishing value transfer in the ambient space embedding the surface.\\n- A persistent challenge in neural implicit representations is that, while data is represented volumetrically (e.g., through neural SDFs), the actual solutions are constrained to the 0-level isosurface. 
Sampling on this isosurface can be inefficient, but CPM provides an effective alternative by leveraging the ambient space, enhancing both efficiency and rigor.\\n\\nOverall, I would love to see this line of work being continued and the math formulation should be shared and seen within the ML community.\\n\\n---\", \"some_misc_comments\": [\"The related works section is thoughtfully composed, with necessary references cited and no excess, reflecting high-quality citation practices.\", \"The choice of ground truth in this paper is well-justified and suitable for the presented comparisons.\"], \"weaknesses\": \"I have two concerns, regarding the claimed first and third contributions.\\n\\n---\\n\\n### Performance vs. Storage vs. Accuracy\\nThe storage and accuracy benefits presented as a core contribution appear somewhat overstated since these gains stem from the inherent compact representation of neural fields, as noted in prior works like INSR-PDE (Chen et al., 2023). The neural network, here largely a standard MLP, serves as a model reduction tool or compressed parameter space. However, the substantial cost is slower simulation speeds, particularly noticeable in evolving the PDE on a neural representation, and this tradeoff is well-documented in the field, tracing back to foundational work like *Geometry Processing with Neural Fields* (Yang et al., 2021). Additionally, working with surface PDEs inherently mitigates spatial complexity compared to volumetric Eulerian approaches, further diluting the impact of memory savings in this context. 
Unless optimized network designs or implementation techniques were used, this contribution may feel more like a tradeoff typical of neural fields than a novel improvement.\n\n**TL;DR:** Without unique implementation optimizations, this tradeoff doesn\u2019t stand out as an independent contribution, as neural networks naturally offer compact representations at the expense of computational speed.\n\n---\n\n### First to Simulate on Neural Implicit Surface Representation\nThe claim of being the first to simulate incompressible fluid flow on neural implicit surfaces is somewhat uncertain, as prior work using sampling techniques, like *Geometry Processing with Neural Fields* (Yang et al., 2021) or INSR-PDE, could also solve surface PDEs such as the Laplace equation by sampling on the surface. While it\u2019s conceivable that these methods struggle with incompressibility when applied to Navier-Stokes, demonstrating their limitations would highlight the advantages of the Closest Point Method (CPM) for ensuring divergence-free constraints on neural surfaces. Including such comparative results, even as failure cases, could effectively underscore this paper\u2019s unique approach.\", \"questions\": \"### Questions\n1. **Use of DEC Language**: The paper\u2019s use of Discrete Exterior Calculus (DEC) is rigorous and suits the formal approach taken. However, many in the ML and physics communities might be more accustomed to traditional differential or vector calculus, so DEC may require more adjustment for those readers. Adding intuitive explanations alongside the DEC formalism could enhance accessibility, although this may vary depending on the preferences of other reviewers.\n2. **Handling Narrow Geometric Features in CPM**: The reliance on ambient space in CPM may lead to ambiguities when processing narrow or thin features. 
Clarifying whether this dependency impacts stability or accuracy for such geometries would enhance the framework\u2019s applicability and inform potential adaptations to handle such cases.\n---\n### Suggestions\n1. **Missing Citations**\n 1. For the by-construction divergence-free field with a neural network, maybe also cite [Deep Fluids](https://onlinelibrary.wiley.com/doi/10.1111/cgf.13619).\n2. **Clarifying Performance Gains Over INSR**: Intuitive explanation of why your method is > INSR > PINN when constrained by storage size. Intuitively, INSR is superior to PINN because it doesn\u2019t record time in the neural field, so, given the same storage budget, INSR should and must outperform PINN. However, your method doesn\u2019t gain from saving less information in the neural field to achieve higher accuracy (i.e., it doesn\u2019t concentrate model expressiveness on specific features to achieve this). So, what is the intuitive reason behind your method\u2019s improved results over INSR? Is it due to the CPM formulation or the Helmholtz decomposition? An \u201cablation\u201d would be helpful here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Reference**:\n\n[1] Chen H, Wu R, Grinspun E, et al. Implicit neural spatial representations for time-dependent PDEs//International Conference on Machine Learning. PMLR, 2023: 5162-5177.\n\n[2] King N, Su H, Aanjaneya M, et al. A Closest Point Method for PDEs on Manifolds with Interior Boundary Conditions for Geometry Processing[J]. ACM Transactions on Graphics, 2024.\n\n[3] Marz T, Macdonald C B. Calculus on surfaces with general closest point functions[J]. SIAM Journal on Numerical Analysis, 2012, 50(6): 3303-3328.\n\n[4] Aamari E, Kim J, Chazal F, et al. Estimating the reach of a manifold[J]. 2019.\"}", "{\"comment\": \"Thank you for the reply. 
I appreciate the efforts to include the presentation changes. Following reviewer Y5ci, I also respectfully disagree with reviewer V8TK for the same reasons. This paper is not only a sound technical advance, but also a very good example of proper math writing in a machine learning paper, which is somewhat difficult to find among the numerous submissions. The insights about links between exterior calculus and Riemannian geometry may also be very valuable for graduate students and researchers in numerous contexts. I will champion this paper unless a major flaw is identified afterwards.\"}", "{\"comment\": \"We want to clarify that our goal is not to propose a general neural-based solver with streamfunctions for all flow construction and energy preservation in 2D/3D simulations. Our focus is specifically on embedded surface flow. While **time consumption** is noted as a potential limitation (similar to the time in Chen et al.), it can be mitigated through a hybrid design or more efficient representation within our framework.\n\n**Presentation** has been revised according to the suggestions from Reviewers Y5Ci and dCvp. **Evaluation** might be influenced by the incomplete convergence of the network (but still better than other baseline methods). In terms of **contribution**, while the curl of the streamfunction is not new, estimating it on a surface using a continuous functional (neural representation) could be of interest.\n\nFor the **related work** section, we have included works for simulations on surfaces, which align with our previous statements, though not all fluid simulation frameworks. Regarding boundary conditions for stream functions, our simulations are primarily on closed surfaces, and this issue does not arise. 
More complex surfaces (open surfaces) are discussed in Appendix F.2 as a limitation and a future research direction.\", \"the_specfic_problems\": \"> Q1: There are no examples that support the claim that this paper improves previous approaches.\n\n> **R1:** At least, for the functional fluid on the surface (which is a classic solver), we can improve on its robustness to mesh quality, as shown in Appendix E.4. We admit that the classic solver benefits greatly in terms of speed, but more effort needs to be devoted to its surface implementation (including the estimation of the surface differential form), and there is no guarantee of the robustness of the methods on surfaces.\n\n> Q2: Covector fluids would work on a surface mesh, they have 2D examples. That's the beauty of DEC: it really abstracts the domain representation, one has only to implement the exterior derivative properly (which is easy). I suppose that Covector Fluids did not show examples on a surface because of the limited application of such flows.\n\n> **R2:** It is not exactly the same case for 2D and a 2D surface embedded in 3D. The implementation of covector fluids employs a staggered grid rather than a mesh for operator estimation. Actually, accurately estimating the differential form is not straightforward and can introduce instability on low-quality meshes (as stated in Li et al.'s CPM poster).\n\n> Q3: CPM is not necessary to identify the streamfunction as a differential form.\n\n> **R3:** The differential form of the streamfunction does not itself need CPM, but its estimation, especially in a continuous manner, needs CPM (Li et al.'s) as support.\n\n> Q4: Problems for impulse-based formulation.\n\n> **R4:** In our reply, we primarily address your question regarding **traditional operator splitting** on surfaces. We acknowledge that impulse-based methods offer significant improvements in energy preservation and efficiency. 
But our main focus is the simulation on the surface with different representations. Discussing impulse-based methods within this scope requires further work on implementing differential operators and impulse gauge variables on surfaces. Actually, we hope our method serves as a starting point for integrating differential forms, CPM, and neural functionals (as stated by Reviewer Y5ci) via the stream-function. Exploring impulse-based methods with neural representations presents an exciting future direction.\n\n> Q5: Storage Problem\n\n> **R5:** We benefit from the compact representation in terms of storage consumption (as noted by Reviewer Y5ci and Chen et al.). Storage can be a challenge because, even with meshes, high-resolution texture maps are needed to store simulation variables, which can be costly. In our paper, we actually aim to construct a continuous representation of surface flow, where compactness naturally reduces consumption. Several CG papers, such as those by Anzenot et al. and Elcott et al., focus on surface flow, and we follow their lead for surface visual effects. Our methods can be further optimized for time efficiency and applied in practical scenes.\"}", "{\"comment\": \"Thank you for your kind support and appreciation. We\u2019re glad you found the technical and mathematical aspects valuable. We greatly appreciate your willingness to champion our work.\"}", "{\"comment\": \"We deeply appreciate the reviewer's enthusiastic and high evaluation of our work, as reflected in the rare 'Strong Accept' rating. Your thoughtful and detailed feedback has greatly encouraged us, and we are genuinely grateful for your recognition. 
Your comments on the paper's writing are especially valuable and will greatly assist us in presenting our work more effectively.\n\n> **Q1:** An image depicting equation (4) and another showing the advection process would greatly improve the friendliness of the paper since both processes are very geometric.\n\n**R1:** We have included both figures in the revised version of the paper in Sec. 3.2 and 4.1.\n\n> **Q2:**\n> The presentation could be more friendly by giving some intuition along the text. I will point out some places where I think this kind of intuition would be beneficial.\n\n**R2:** We will incorporate these refinements in the revised manuscript as follows:\n\n1. Line 199: Add the description form of the divergence function.\n2. Line 299: Include the reference image.\n3. Line 234: Add a description of the vorticity.\n4. Line 320: Include a reference image for the covariant derivative.\n5. Line 327: Relate the inner product to the first-order approximation.\n6. Line 344: Provide an explanation for harmonic component modeling and refer to Appendix F.1 for further discussion.\n\nWe sincerely appreciate your efforts to help us improve the clarity and presentation of our work.\n\n\n> **Q3:** This paper deserves an acronym so it may be more easily referenced in the future by other researchers. I advise the authors to think about changing the title to include a creative acronym.\n\n**R3:** We believe NFFS (Neural Functional Flow on Surface) is sufficient, and we will include it in our revised manuscript.\n\n> **Q4:** Why introduce $f$ in equation (8) instead of using $\\omega$ directly?\n\n**R4:** Yes, I think we can directly use $\\omega$.\"}", "{\"comment\": \"_\u201cStorage Problem. Regarding the storage concern, we acknowledge that one-step computation is often sufficient for fast local result previews. 
However, for studies involving long-term effects or rendering on arbitrary paths (such as in scientific research or visual effects), storage becomes a significant consideration. Additionally, for scenarios involving data sharing or long-term analysis, storage efficiency cannot be overlooked. Our model addresses these challenges by supporting continuous input and enabling the production of very high-resolution results. For comparison in the content, we believe the current resolution is adequate to show the effectiveness and robustness of our results, while higher-resolution simulation for the classic method will suffer from storage and robustness problems.\u201d_\n\nStorage can be a smaller problem for visual effects in the case of volumetric simulations, but I don't see the point of a method that reduces the storage for 2D simulations embedded on a 3D manifold. The performance overhead that comes with the proposed method, along with its own computational complexity, simply does not justify its usage. Moreover, I have never seen someone using surface-only fluid simulations to do even a single shot in a movie. So from my point of view, it's a stretch to imply that this is going to be useful for visual effects, or that such an approach has practical usability in this context.\", \"summary\": \"This paper builds on the recently introduced Closest Point Exterior Calculus (CP-EC) to propose a novel method for preserving the divergence-free property of vector fields on surfaces. By leveraging the closest point map, this approach seamlessly extends computations from the surface to the surrounding Euclidean space. At the core of the paper, Theorem 3.1 presents a specific construction for generating a divergence-free vector field on a surface using the CP-EC framework. 
This framework enables the calculation of gradient, divergence, and curl in a way that respects the intrinsic geometry of the surface, ensuring that the velocity field remains divergence-free when constrained to the surface. A key advantage of this method is its flexibility, as it supports simulations on various surface representations, including analytic surfaces, explicitly defined mesh surfaces, and, notably, neural implicit surfaces. The paper introduces a complementary advection process based on covariant derivatives for fluid dynamics, designed to minimize energy dissipation. Numerical studies confirm the framework\u2019s accuracy, energy preservation, memory efficiency, and adaptability to geometry. Results show it achieves about 15 times higher accuracy than other methods with similar storage, offers 5 times memory savings over classic methods, and effectively models fluid dynamics. Additionally, the simulator's robustness is demonstrated through an end-to-end generation task and a real-world velocity field decomposition.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper shows that the recently introduced Closest Point Exterior Calculus (CP-EC) is very well suited to fluid simulation on neural implicitly defined surfaces in 3D. The CP-EC makes it possible to automatically guarantee the divergence-free property of the vector field. The method achieves up to 15 times higher accuracy than previously used discretization methods on the surface with the same memory requirements, which is confirmed by extensive numerical simulations of different applications.\", \"weaknesses\": \"The English in the current version of the paper needs to be improved. 
Numerous articles are missing and sometimes the wrong words are used (subtle instead of subtleties, divergence free instead of divergence-free property, etc.).\n\nCompared to the actually straightforward application of the CP-EC to the case of flow simulation on surfaces, the paper seems cumbersomely long and is also not as clear to read as the recent papers on the topic referenced in the paper, whose presentation is clearer and more concise. Maybe the authors can try to improve on that.\", \"questions\": \"How do you do the interpolation of the pulled forms? In the CP-EC poster the authors recommended the Cubic Lagrangian.\nHow does this interpolation affect the divergence-free property of the velocity field? Were there any numerical problems?\nIs it possible to do an ablation study on this point?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the suggestion. To address your questions and concerns:\n\n\n> **Q1:** Poor exposition along with several typos.\n\n**R1:** If any writing issues are present, we assure you that we will conduct thorough proofreading and optimize the structure and descriptions throughout the paper. (L11: revised ``Incompressible Euler fluid`` to ``Incompressible fluid``; L20: corrected to ``We propose a neural physical simulation framework on the surface with the implicit neural representation``; L240 and others: rephrased to improve transitions and ensure smoother readability).\n\n> **Q2:** The main idea of the paper is based on wrong assumptions ... since a solver usually has to store a single time-step of the represented variables for advancing the simulation state. The presented results also show very modest resolutions.\n\n**R2:** \n1. Motivation Clarification. To clarify, we do not attribute the independence from mesh quality solely to the implicit neural representation. 
In fact, many methods that do not incorporate the Closest Point Method face challenges related to mesh quality, and our goal is to improve these approaches. Even when using the Closest Point Method with a discretized grid, discretization errors can still occur in operator calculations. By leveraging neural parameterization, we achieve analytically accurate operator computation, which enhances robustness and addresses these challenges more effectively.\n\n 2. Storage Problem. Regarding the storage concern, we acknowledge that one-step computation is often sufficient for fast local result previews. However, for studies involving long-term effects or rendering on arbitrary paths (such as in scientific research or visual effects), storage becomes a significant consideration. Additionally, for scenarios involving data sharing or long-term analysis, storage efficiency cannot be overlooked. Our model addresses these challenges by supporting continuous input and enabling the production of very high-resolution results. For comparison in the content, we believe the current resolution is adequate to show the effectiveness and robustness of our results, while higher-resolution simulation for the classic method will suffer from storage and robustness problems.\n\n> **Q3:** There are missing references and/or previous methods are not thoroughly considered ... modern structure-preserving solvers accurately advect velocities without major stability issues.\n\n**R3:** The papers referred to here are mainly designed to deal with the numerical dissipation caused by advection and projection. But none of them is designed for surface flow (not even with a surface flow demo). While their methods could theoretically be extended to surface flows, the additional operators they propose would require careful adaptation and design for surface-specific contexts, which is beyond the scope of this work. 
To be more specific, Covector Fluid is relatively suitable for surface flows, but it still needs to specify and compute the co-vector form directly on the surface. Methods like Fluid Simulation on Neural Flow Maps, Eulerian-Lagrangian Fluid Simulation on Particle Flow Maps, and Impulse Particle In Cell rely on flow map forms, which could introduce the possibility of instability due to surface operator estimation error when applied to surfaces. Lagrangian Covector Fluid with Free Surface differs slightly due to its use of power graph discretization, which also presents extra challenges for extension to surfaces.\n\nRegarding Elcott et al. 2007 [5], the computation of flow lines on triangle meshes is known to introduce numerical instability, as discussed in Anzenot et al. 2014 [6]. Although Impulse Particle In Cell shows potential to solve the problem on the surface, it still needs further verification, which is beyond the scope of this paper.\n\n> **Q4:** The paper partially focuses on showing mathematical proofs that are known by the exterior calculus community ... move the lengthy mathematical descriptions to the Appendix.\n\n**R4:** We believe this is not a direct consequence of solely the divergence-free field construction using exterior calculus and discrete differential geometry as cited. Our approach requires the use of an additional tool based on the Closest Point Method (CPM), as described in [7], to theoretically identify the stream function and other necessary differential forms. Our theoretical formulation provides a more rigorous construction of the \"surface curl\" and enables the application of neural parametric functions for surface vector field dynamics. To assist readers who may not be familiar with these techniques, we aim to include a comprehensive sketch in the appendix. 
We believe this addition will provide valuable insights and improve the accessibility of our methodology.\"}", "{\"comment\": \"We sincerely thank the reviewer for the enthusiastic and exceptionally positive evaluation of our work. Your insightful and detailed feedback has been deeply encouraging, and we are truly grateful for your recognition and support. Your comments are highly constructive and will help us further clarify the contributions of our work.\n\n> **Q1:** Clarifications on **Performance vs. Storage vs. Accuracy**. \n\n**R1:** \nIt is correct. The trade-off is effectively described in INSR-PDE, highlighting the advantages of compact neural representations. We will clarify this point in our contributions by explicitly mentioning the power of neural networks (as also referenced in Chen et al. [1]). We believe it is important to report this aspect and improvement even though it is attributed to the compact neural representation, as surface scenarios have not yet been simulated using implicit neural representations (INRs).\n\n\n> **Q2:** Clarifications on **First to Simulate on Neural Implicit Surface**\n\n**R2:** \nThat is correct. First, we will restate our contribution as: ``the first study to present simulation results of incompressible fluid flow on neural implicit surfaces with a guarantee of divergence-free behavior.``\n\nAdditionally, we have included more results in Appendix E.3 to further illustrate the effectiveness of our divergence-free functional. These results also demonstrate its utility in improving the surface sampling method for handling incompressibility on the analytic surface. However, for non-analytic surfaces, designing a divergence-free functional without CPM is non-trivial. Consequently, the surface sampling method would fail without a proper divergence-free design by CPM, similar to the failure on analytic surfaces, as shown in Appendix E.3.\n\n\n\n> **Q3:** **Use of DEC Language**\n\n**R3:** Thanks for your advice. 
We follow the suggestion of Reviewer dCvp and add more illustrations to make it more intuitive for a broader readership.\n\n\n> **Q4:** **Handling Narrow Geometric Features in CPM**\n\n**R4:** Yes, narrow or thin features can introduce ambiguity. It will result in inefficiency, inaccuracy, and energy dissipation due to increased difficulties in convergence, since the network will be misled by the non-unique closest-point mapping. To mitigate this issue, as mentioned in Appendix F.1, we can follow the approaches of (King et al. 2023 [2], Marz and Macdonald 2012 [3]) and sample the neighborhood within a smaller tube radius (closer to the surface). In practice, we can determine the sampling distance threshold to the surface as less than $\\Delta x$ in the equation in Section 8, guided by the estimation of the reach distance (Aamari et al. 2019 [4]). This estimation can be locally constructed using a simple mesh extractor from the signed distance function (SDF), if no mesh is available, to roughly detect thin regions and adaptively adjust the sampling distance. This adaptive approach will improve efficiency and convergence, and we will include more details about adaptations in Appendix F.1.\n\n\n> **Q5:** **Missing Citations** \n\n**R5:** Thank you for pointing this out. We will include it in the related work section under the part ``Physical Simulation based on Neural Network``.\n\n> **Q6:** **Clarifying Performance Gains Over INSR**\n\n**R6:** Actually, we believe the improvement is primarily attributed to the Helmholtz decomposition. \nIn the comparison between INSR and our method on the sphere jet, INSR is implemented using spherical coordinates with a fixed radius. We adopt two parameters ($\\theta$, $\\phi$) as input to the network, enabling the analytical verification of surface divergence in spherical coordinates without relying on CPM. 
Alternatively, this could also be imagined as maintaining a perfect CPM in 3D, where the value at each 3D point is taken as the projection onto the sphere. However, even under these idealized conditions, the results still exhibit significant error.\nIf an imperfect CPM is included, additional errors in operator estimation would further worsen the results. Furthermore, in Appendix E.3, we adapt our divergence-free parameterization design to a surface-sampling Eigen-Net (using Spherical Harmonics to estimate the Laplace-Beltrami operator analytically), and the results show improvement.\n\nNevertheless, we believe that CPM remains a critical component for handling arbitrary surfaces (especially non-analytical surfaces). It not only serves as a theoretical tool for constructing continuous differential forms to ensure divergence-free properties on arbitrary surfaces but also as a practical approach that enhances sampling efficiency by enabling uniform sampling in the ambient space, rather than directly on the iso-surface.\"}"
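As a purely illustrative addendum to the exchange above: the construction at the heart of this thread — taking the tangent velocity to be the rotated surface gradient (the "surface curl") of a scalar streamfunction, with surface operators obtained as ambient operators applied to closest-point extensions — can be sketched numerically on the unit sphere, where the closest-point map is analytic. Everything below (the function names, the choice of streamfunction `psi`, the finite-difference checks) is an assumption of this sketch, not code from the paper under review.

```python
import numpy as np

def cp_sphere(x):
    # Closest-point map onto the unit sphere: cp(x) = x / |x|.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def psi(x):
    # An arbitrary smooth streamfunction, extended off the surface by
    # composing with the closest-point map (the CPM extension principle).
    p = cp_sphere(x)
    return np.sin(3.0 * p[..., 0]) * np.cos(2.0 * p[..., 1]) + p[..., 2]

def grad(f, x, h=1e-5):
    # Central-difference gradient in ambient R^3; at surface points this
    # agrees with the surface gradient because psi is constant along normals.
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def velocity(x):
    # Tangent velocity u = n x grad(psi): the rotated surface gradient,
    # i.e. the "surface curl" of psi. On the unit sphere, n = cp(x).
    n = cp_sphere(x)
    return np.cross(n, grad(psi, x))

def surface_divergence(x, h=1e-4):
    # CPM divergence principle: on the surface, div_S u equals the ambient
    # divergence of the closest-point extension y -> u(cp(y)).
    u_ext = lambda y: velocity(cp_sphere(y))
    d = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        d += (u_ext(x + e)[i] - u_ext(x - e)[i]) / (2.0 * h)
    return d

rng = np.random.default_rng(0)
points = cp_sphere(rng.normal(size=(8, 3)))
residuals = [abs(surface_divergence(p)) for p in points]
```

Because the velocity is the rotated surface gradient of a scalar, its surface divergence vanishes identically, so the `residuals` should sit at finite-difference error level. On a non-analytic (e.g. neural implicit) surface, the same check would go through with `cp_sphere` replaced by that surface's closest-point map — which is exactly the role CPM plays in the discussion above.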
58T7xcTxJD
Dual-level Affinity Induced Embedding-free Multi-view Clustering with Joint-alignment
[ "Shengju Yu", "Zhibin Dong", "Siwei Wang", "Suyuan Liu", "KE LIANG", "Xinwang Liu", "Naiyang Guan", "Tiejun Li", "Yiu-ming Cheung" ]
Despite remarkable progress, there still exist several limitations in current multi-view clustering (MVC) techniques. Specifically, they generally focus only on the affinity relationship between anchors and samples, while overlooking that between anchors. Moreover, due to the lack of data labels, the cluster order is inconsistent across views and accordingly anchors encounter a misalignment issue, which will confuse the graph structure and disorganize the cluster representation. Even worse, it typically brings variance when forming the embedding, degrading the stability of clustering results. In response to these concerns, in this paper we propose an MVC approach named DLA-EF-JA. Concretely, we explicitly exploit the geometric properties between anchors via a self-expression learning technique, and utilize a topology learning strategy to feed the captured anchor-anchor features into the anchor-sample graph so as to explore the manifold structure hidden within samples more adequately. To reduce the misalignment risk, we introduce a permutation mechanism for each view to jointly rearrange anchors according to respective view characteristics. Besides not involving the selection of a baseline view, it can also coordinate with anchors in the unified framework and thereby facilitate the learning of anchors. Further, rather than forming an embedding and then performing spectral partitioning, based on the criterion that the assignment between samples and clusters should be hard, we manage to construct the cluster labels directly from original samples using a binary strategy, not only preserving data diversity but also avoiding variance. Experiments on multiple publicly available datasets confirm the effectiveness of our DLA-EF-JA.
[ "Mulit-view Clustering", "Large-scale Clustering", "Anchor Clustering" ]
Reject
https://openreview.net/pdf?id=58T7xcTxJD
https://openreview.net/forum?id=58T7xcTxJD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNUJ9WeMIK", "rPfoMxf1Hm", "qveGHurQGP", "p8P0KRCN2C", "lz8ACzwouB", "l9J3hC6Y1w", "jdbGZhBAmD", "jQRTsaWJvX", "gqGkIM3oxk", "g4chaEhrLu", "ZNqSnbSZTc", "Z7tOu079Jy", "Z7hCjSBmwN", "Xj0pacQ0qK", "VQBmqNKKrn", "ILsy3NpTpt", "C1PsiXHDdb", "BMeDPCgikD", "8xlOh5aOPC", "7TcEiozxPA", "6F9HpFYpUF", "5xfI5a8A4h", "4QDRe7Sg5T", "1ioK52MFbe", "05fO0IYHVu" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision" ], "note_created": [ 1730038082163, 1732351077773, 1732693658950, 1732353128795, 1732344886830, 1732351578669, 1732352406940, 1732515892162, 1732693598717, 1732350838240, 1732353171892, 1732693741102, 1730380207910, 1730530683550, 1732343500883, 1732347411877, 1732693538527, 1730716869268, 1732342968588, 1732351026772, 1732346762646, 1732353746701, 1732352470715, 1734612386880, 1737523384083 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission195/Reviewer_zrPV" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Reviewer_2FNR" ], [ 
"ICLR.cc/2025/Conference/Submission195/Reviewer_4XPe" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Reviewer_Ccq1" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Authors" ], [ "ICLR.cc/2025/Conference/Submission195/Area_Chair_eztt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a dual-level affinity induced embedding-free multi-view clustering method with joint alignment, called DLA-EF-JA. Based on previous anchor based multi-view clustering, it further considers the relations among anchors by learning an affinity matrix that are used to guide the anchor matrix learning with graph Laplacian. The multi-view anchors are adaptively aligned. The discrete cluster indicator is also jointly learned.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper is easy to read.\\n2.\\tExtensive experiments are conducted to show the effectiveness of the method as well as efficiency.\", \"weaknesses\": \"1.\\tThe novelty of this paper is incremental. The authors consider the relations among samples by self-expression affinity learning, and add a graph based Laplacian for anchor matrix regularization. However, the self-expression affinity learning and graph based Laplacian are widely used in existing subspace clustering works. It also remains unclear why the anchor self-expression enhances the quality of anchors.\\n\\n2.\\tWhy learn an anchor affinity matrix $S_p$ for each view separately? It seems to overlook inter-view interactions. Why not directly learn a consensus anchor affinity matrix? 
Will it improve the performance?\\n\\n3.\\tHow do you set the number of anchors $k$? What is the influence of it?\\n\\n4.\\tThe experimental results are not convincing. For instance, OrthNTF achieves 69.4% Acc and 68.6% NMI values on the Reuters dataset, while this paper only reports 28.67% Acc and 3.07% NMI.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 4XPe (Part-4)\", \"comment\": [\"**Q4:** The model\\u2019s performance is sensitive to carefully tuned hyperparameters, such as $\\\\lambda$ and $\\\\beta$. Can the authors provide further insights into the potential effects of anchor noise and how it could be mitigated to improve robustness?\", \"**A4:** Thanks. \\tThese parameters control critical trade-offs in the model. $\\\\lambda$ governs the balance between reconstruction loss and anchor self-expression regularization, while $\\\\beta$ influences inter-view consistency. 
Mis-tuning these parameters could lead to sub-optimal performance or instability, especially when noisy anchors are introduced.\", \"------\", \"Anchor noise could bring the following potential effects,\", \"it might induce the model to capture the noise patterns instead of the underlying true relationships in the data, leading to over-fitting.\", \"it might cause the loss surface to become irregular, complicating optimization and making the choice of hyper-parameters $\\\\lambda$ and $\\\\beta$ even more crucial.\", \"it might bias the model towards spurious correlations, reducing the ability to generalize to unseen data.\", \"it might distort the guidance provided to the model, leading to incorrect representations in subsequent optimization.\", \"-------\", \"To mitigate anchor noise, some possible schemes are as follows,\", \"introduce a pre-filtering mechanism or utilize confidence-based thresholds to exclude potentially noisy anchors during learning.\", \"adopt multiple anchor sets from different initialization or sampling strategies and combine their outputs to mitigate individual noise effects.\", \"carefully pre-process the datasets to identify and remove instances of anchor noise using the outlier detection techniques.\", \"incorporate some prior knowledge to adaptively tune $\\\\lambda$ and $\\\\beta$ during learning based on the noise levels.\"]}", "{\"title\": \"Thanks for Reviewer 2FNR\", \"comment\": \"Dear reviewer 2FNR,\\n\\nWe greatly value your insightful and constructive feedback. We hope that our response has addressed your concerns. If you have any further suggestions or questions, please do not hesitate to share them. 
We are more than willing to discuss them with you.\n\nWe fully understand that you are extremely busy, so we would greatly appreciate your time in this process.\n\nBest wishes,\n\nThe authors of 195\"}", "{\"title\": \"Response to Reviewer zrPV(Part-1)\", \"comment\": \"We sincerely thank Reviewer zrPV for the insightful feedback and guidance for the revision of this manuscript. All concerns have been carefully addressed point by point. We sincerely hope these issues have been cleared.\n\n\n**Q1:** The self-expression affinity learning and graph based Laplacian are widely used in existing subspace clustering works. It remains unclear why the anchor self-expression enhances the quality of anchors. \n\n**A1:** Thanks. The self-expression affinity learning is utilized to construct the sample-sample affinity with full size in subspace clustering. Inspired by this, we explicitly extract the global structure between anchors via self-expression learning, and meanwhile feed it into the anchor-sample affinity so as to better exploit the manifold characteristics hidden within samples. (Kindly note that in this work, we did not calculate the **sample-sample** relations through self-expression learning.) In addition to this, our work also designs a joint-alignment mechanism which does not involve the selection of the baseline view and meanwhile can cooperate with the learning of anchors. Moreover, a solving scheme with linear complexity enables our framework to effectively tackle MVC tasks. \n\n-------\n\nAnchor self-expression learning can help extract the geometric characteristics between anchors, and meanwhile facilitates the learning of anchors owing to the joint-optimization mechanism. To validate this point, we organize four groups of ablation experiments, i.e., No self-expression + No learning (NSNL), No self-expression + Having learning (NSHL), Having self-expression + No learning (HSNL), Having self-expression + Having learning (HSHL, i.e., Ours).
The comparison results are reported in the following table.\n\n\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| ACC(\\%) | | | | | | | |\n| NSNL | 60.42 | 43.88 | 27.96 | 14.33 | 25.32 | 19.73 | 41.27 |\n| NSHL | 71.51 | 49.05 | 30.35 | 16.75 | 47.05 | 26.69 | 52.15 |\n| HSNL | 65.64 | 64.59 | 30.24 | 16.68 | 27.20 | 24.08 | 47.21 |\n| HSHL | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\n| NMI(\\%) | | | | | | | |\n| NSNL | 64.37 | 35.68 | 5.88 | 1.01 | 1.38 | 12.57 | 44.79 |\n| NSHL | 83.97 | 40.21 | 6.02 | 2.53 | 23.19 | 15.48 | 58.13 |\n| HSNL | 69.84 | 37.95 | 33.54 | 1.06 | 1.43 | 12.98 | 47.07 |\n| HSHL | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\n| Fscore(\\%) | | | | | | | |\n| NSNL | 63.76 | 48.43 | 28.79 | 23.42 | 33.87 | 16.86 | 37.64 |\n| NSHL | 73.79 | 51.25 | 30.42 | 28.54 | 43.04 | 17.70 | 46.77 |\n| HSNL | 69.33 | 61.54 | 30.40 | 24.43 | 35.25 | 18.03 | 41.43 |\n| HSHL | **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\n\nAs seen, HSNL is consistently preferable to NSNL, and HSHL is consistently preferable to NSHL. These demonstrate that the anchor self-expression can facilitate the clustering performance improvement. Additionally, NSHL is consistently preferable to NSNL, and HSHL is consistently preferable to HSNL. These illustrate that the anchor learning can help improve the clustering results. Therefore, we can conclude that the anchor self-expression learning can enhance the quality of anchors and thereby improve the clustering results.\"}", "{\"title\": \"Response to Reviewer Ccq1 (Part-3)\", \"comment\": \"**Q4:** Table 5 does not include all the symbols. 
The Methodology section might be too brief, which should be introduced with more details by explaining the reasons for the design of each component.\\n\\n**A4(1):** Thanks. We have updated Table 5, please check it. \\n\\n The symbols in this manuscript are as follows, \\n\\n| Symbol | Meaning |\\n|---|---|\\n| $n$ | the number of samples |\\n| $m$ | the number of anchors |\\n| $v$ | the number of views |\\n| $k$ | the number of clusters |\\n| $d_p$ | the data dimension on view $p$ |\\n| $\\\\mathbf{X}_p \\\\in \\\\mathbb{R}^{d_p \\\\times n}$ | the data matrix on view $p$ |\\n| $\\\\mathbf{A}_p \\\\in \\\\mathbb{R}^{d_p \\\\times m}$ | the anchor matrix on view $p$ |\\n| $\\\\mathbf{T}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | the permutation matrix on view $p$ |\\n| $\\\\mathbf{B}_p \\\\in \\\\mathbb{R}^{m \\\\times k}$ | the basic coefficient matrix on view $p$ |\\n| $\\\\mathbf{C} \\\\in \\\\mathbb{R}^{k \\\\times n}$ | the cluster indicator matrix |\\n| $\\\\mathbf{S}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | the anchor self-expression matrix on view $p$ |\\n| $\\\\mathbf{D}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | the degree matrix of $\\\\mathbf{S}_p$ on view $p$ |\\n| $\\\\boldsymbol{\\\\alpha} \\\\in \\\\mathbb{R}^{v \\\\times 1}$ | the view weighting vector |\\n| $\\\\mathbf{Z}_p \\\\in \\\\mathbb{R}^{m \\\\times n}$ | the anchor graph on view $p$ |\\n| $\\\\mathbf{L_s} \\\\in \\\\mathbb{R}^{m \\\\times m}$ | the Laplacian matrix about $\\\\mathbf{S}_p$ |\\n| $\\\\mathbf{E}_p \\\\in \\\\mathbb{R}^{m \\\\times n}$ | $\\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C}$ |\\n| $\\\\mathbf{F}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | $\\\\mathbf{T} _{p} - \\\\mathbf{T} _{p} \\\\mathbf{S} _{p}$ |\\n| $\\\\mathbf{G}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | $\\\\mathbf{A}_p^{\\\\top} \\\\mathbf{A}_p$ |\\n| $\\\\mathbf{H}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | $\\\\mathbf{S}_p \\\\mathbf{S}_p^{\\\\top}$ |\\n| $ \\\\mathbf{M}_p \\\\in \\\\mathbb{R}^{m 
\\\\times m}$ | $\\\\mathbf{B} _{p} \\\\mathbf{C} \\\\mathbf{C}^{\\\\top} \\\\mathbf{B} _{p}^{\\\\top}$ |\\n| $\\\\mathbf{J}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | $\\\\mathbf{A}_p^{\\\\top} \\\\mathbf{X}_p \\\\mathbf{C}^{\\\\top} \\\\mathbf{B}_p^{\\\\top}$ |\\n| $\\\\mathbf{Q}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$ | $\\\\mathbf{T} _{p}^{\\\\top}\\\\mathbf{A} _{p}^{\\\\top} \\\\mathbf{A} _{p} \\\\mathbf{T} _{p}$ |\\n| $\\\\mathbf{Z} \\\\in \\\\mathbb{R}^{n \\\\times k}$ | $2\\\\sum _{p=1}^v \\\\boldsymbol{\\\\alpha} _p^2 \\\\mathbf{X} _p^{\\\\top} \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p}$ |\\n| $\\\\mathbf{W} \\\\in \\\\mathbb{R}^{k \\\\times k}$ | $\\\\sum _{p=1}^v \\\\boldsymbol{\\\\alpha} _p^2 \\\\mathbf{B} _{p}^{\\\\top} \\\\mathbf{T} _{p}^{\\\\top} \\\\mathbf{A} _{p}^{\\\\top} \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} + \\\\beta \\\\mathbf{B} _p^{\\\\top} \\\\mathbf{L_s} \\\\mathbf{B} _p $ |\\n\\n\\n\\nFor the methodology, we here provide more details to explain the reasons for the design of each component. \\n\\t\\nFirst of all, to exploit the geometric characteristics between anchors, inspired by the concept of subspace reconstruction, we introduce self-expression learning for anchors. Specially, we utilize the paradigm $\\\\left\\\\|| \\\\mathbf{A}_p -\\\\mathbf{A}_p \\\\mathbf{S}_p \\\\right\\\\||_F^2 $ to explicitly extract the global structure between anchors. \\n\\n \\nAfter obtaining the anchor-anchor characteristic $\\\\mathbf{S}_p \\\\in \\\\mathbb{R}^{m \\\\times m}$, we need to integrate that into anchor-sample so as to exploit the manifold features inside samples. To this end, we adopt the idea of point-point guidance to adjust the anchor graph. 
Note that the rows of anchor graph $\\mathbf{Z}_p \\in \\mathbb{R}^{m \\times n}$ correspond to anchors, and thus we utilize the element $[\\mathbf{S} _p] _{i,j}$ to guide $[\\mathbf{Z} _p] _{i,t}$ and $[\\mathbf{Z} _p] _{j,t}$, $t=1, \\cdots, n$, which can be formulated as $\\sum _{i,j=1}^{m} \\left\\| [\\mathbf{Z} _p] _{i,:} - [\\mathbf{Z} _p] _{j,:} \\right\\| _2^2 [\\mathbf{S} _p] _{i,j}$ and aims at restricting similar features to maintain the consistency. \n\n\nThen, considering that the nature of anchor misalignment is that the order of anchors on different views is not identical, we alleviate the misalignment issue by rearranging anchors. Particularly, we associate each view with a learnable permutation matrix $\\mathbf{T} _p$ to freely transform anchors in the original dimension space according to the characteristics of each view. In addition to not involving selecting the baseline view, our mechanism also can coordinate with anchors in the unified framework and thereby facilitates the learning of anchors. Correspondingly, the anchor matrix $\\mathbf{A}_p$ is reformulated as $\\mathbf{A}_p \\mathbf{T}_p$. The self-expression term $\\left\\| \\mathbf{A} _p -\\mathbf{A} _p \\mathbf{S} _p \\right\\|_F^2 $ and the reconstruction term $ \\left\\| \\mathbf{X} _{p} - \\mathbf{A} _{p} \\mathbf{Z} _{p} \\right\\|_F^2 $ are reformulated as $\\left\\| \\mathbf{A} _p \\mathbf{T} _p -\\mathbf{A} _p \\mathbf{T} _p \\mathbf{S} _p \\right\\| _F^2 $ and $ \\left\\| \\mathbf{X} _{p} - \\mathbf{A} _{p} \\mathbf{T} _{p} \\mathbf{Z} _{p} \\right\\| _F^2 $, respectively.\"}", "{\"title\": \"Response to Reviewer 2FNR(Part-1)\", \"comment\": \"We sincerely thank Reviewer 2FNR for the profound comments and guidance for the revision of this manuscript. All concerns have been carefully addressed point by point. 
We sincerely hope these issues have been cleared.\n\n**Q1:** The novelty of this work is limited since the involved components have been widely used for anchor learning and spectral clustering. \n\n**A1(1):** Thanks. We emphasize that current alignment strategies typically require selecting the baseline view, and are also separated from the anchor generation. Different from them, we learn to align based on the characteristics of each view itself and meanwhile jointly conduct anchor generation and anchor alignment, which makes them able to negotiate with each other. Besides, we explicitly take into account the geometric properties between (aligned) anchors, and feed them into the anchor-sample affinity to extract the manifold structure hidden within original samples more adequately. In particular, we also give a feasible solving scheme with linear complexity to optimize the resulting objective. \n\n\n\n\nTo demonstrate the strengths of our alignment strategy, we conduct comparison experiments against several representative alignment algorithms, namely FMVACC [1], 3AMVC [2], and AEVC [3]. \n\n\nFMVACC utilizes feature information and structure information of the bipartite graph generated by fixed anchors to build the matching relationship, and regards the first view of each dataset as the baseline view. \n\n3AMVC gets rid of prior knowledge by identifying and selecting discriminative anchors within a single view using hierarchical searching, and takes the view exhibiting the highest anchor graph quality as the baseline view. \n\nAEVC narrows the spatial distribution of anchors on similar views by leveraging the inter-view correlations to enhance the expression ability of anchors, and treats the view concatenated by column as the baseline view. \n\nThe comparison results are shown in the following table. 
\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|---|---|---|---|---|---|---|\\n| ACC(\\\\%) | | | | | | | |\\n| FMVACC | 74.13($\\\\pm$3.36) | 53.34($\\\\pm$2.84) | 47.10($\\\\pm$4.07) | 22.95($\\\\pm$0.89) | 42.31($\\\\pm$3.17) | 25.58($\\\\pm$0.66) | 55.44($\\\\pm$2.25) |\\n| 3AMVC | 62.60($\\\\pm$5.50) | 38.17($\\\\pm$3.58) | 44.73($\\\\pm$3.74) | **33.31($\\\\pm$1.19)** | 49.56($\\\\pm$2.45) | 26.38($\\\\pm$0.93) | 57.24($\\\\pm$2.26) |\\n| AEVC | **91.82($\\\\pm$3.78)** | 44.61($\\\\pm$4.31) | 39.68($\\\\pm$1.36) | 29.85($\\\\pm$0.02) | 50.88($\\\\pm$0.24) | **27.56($\\\\pm$1.11)** | 54.89($\\\\pm$0.63) |\\n| Ours | 85.47($\\\\pm$0.00) | **80.66($\\\\pm$0.00)** | **52.44($\\\\pm$0.00)** | 26.22($\\\\pm$0.00) | **54.26($\\\\pm$0.00)** | 26.83($\\\\pm$0.00) | **57.36($\\\\pm$0.00)** |\\n| NMI(\\\\%) | | | | | | | |\\n| FMVACC | 80.78($\\\\pm$4.44) | 38.41($\\\\pm$2.92) | 33.50($\\\\pm$2.56) | 9.94($\\\\pm$1.54) | 28.50($\\\\pm$2.29) | 12.86($\\\\pm$0.67) | 57.82($\\\\pm$0.93) |\\n| 3AMVC | 57.43($\\\\pm$3.28) | 41.45($\\\\pm$4.42) | 26.63($\\\\pm$3.37) | **11.68($\\\\pm$2.11)** | 31.03($\\\\pm$1.66) | 14.02($\\\\pm$0.71) | 58.61($\\\\pm$1.62) |\\n| AEVC | 86.62($\\\\pm$1.95) | **49.15($\\\\pm$0.86)** | 17.79($\\\\pm$0.67) | 6.44($\\\\pm$0.02) | 24.47($\\\\pm$0.06) | 13.32($\\\\pm$0.50) | 53.55($\\\\pm$0.34) |\\n| Ours | **89.97($\\\\pm$0.00)** | 45.25($\\\\pm$0.00) | **43.70($\\\\pm$0.00)** | 6.25($\\\\pm$0.00) | **31.87($\\\\pm$0.00)** | **15.64($\\\\pm$0.00)** | **59.21($\\\\pm$0.00)** |\\n| Fscore(\\\\%) | | | | | | | |\\n| FMVACC | 80.15($\\\\pm$7.13) | 41.01($\\\\pm$4.20) | 38.20($\\\\pm$1.89) | 23.79($\\\\pm$0.77) | 43.86($\\\\pm$2.61) | 17.07($\\\\pm$0.35) | 48.78($\\\\pm$1.94) |\\n| 3AMVC | 56.16($\\\\pm$4.80) | 38.28($\\\\pm$3.01) | 32.61($\\\\pm$2.53) | 27.58($\\\\pm$1.50) | 41.13($\\\\pm$1.30) | 17.40($\\\\pm$0.40) | 47.61($\\\\pm$1.62) |\\n| AEVC | **87.94($\\\\pm$3.70)** | 46.16($\\\\pm$1.37) | 
26.57($\\pm$1.24) | 22.19($\\pm$0.01) | 36.19($\\pm$0.63) | 17.15($\\pm$0.27) | 45.99($\\pm$0.53) |\n| Ours | 87.92($\\pm$0.00) | **78.12($\\pm$0.00)** | **41.12($\\pm$0.00)** | **28.55($\\pm$0.00)** | **44.84($\\pm$0.00)** | **20.64($\\pm$0.00)** | **51.37($\\pm$0.00)** |\n\n\n\nFrom this table, one can observe that our results are more desirable in most cases, which illustrates that our proposed alignment strategy is preferable. \n\n\n[1] Wang et al., Align then fusion: Generalized large-scale multi-view clustering with anchor matching correspondences, NeurIPS, 2022. \n\n[2] Ma et al., Automatic and Aligned Anchor Learning Strategy for Multi-View Clustering, ACM MM, 2024. \n\n[3] Liu et al., Learn from view correlation: An anchor enhancement strategy for multi-view clustering, IEEE CVPR, 2024.\"}", "{\"title\": \"Response to Reviewer 2FNR(Part-2)\", \"comment\": \"**A1(2):** Additionally, we also conduct experiments to validate the effectiveness of our alignment strategy, as shown in the following table, where 'Wo-A' and 'WA' denote the results without and with alignment, respectively.\n\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| ACC(\\%) | | | | | | | |\n| Wo-A| 80.73 | 76.59 | 31.65 | 16.67 | 45.29 | 25.91 | 53.68 |\n| WA| **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\n| NMI(\\%) | | | | | | | |\n| Wo-A| 82.53 | 39.55 | 35.41 | 3.32 | 24.77 | 15.30 | 56.47 |\n| WA| **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\n| Fscore(\\%) | | | | | | | |\n| Wo-A| 79.47 | 72.23 | 30.69 | 21.14 | 42.59 | 17.90 | 47.41 |\n| WA| **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\n\nAs seen, our alignment strategy is effective and can noticeably improve the clustering results. 
\\n\\n-----------\\n\\n\\nFurther, we also do alignment separately as current methods do, and the comparison results are reported in the following table where 'SA' denotes the results based on separate alignment. \\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| SA | 72.37 | 62.43 | 45.78 | 22.87 | 44.36 | 22.98 | 51.22 |\\n| Ours | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| SA | 71.97 | 41.24 | 34.76 | 5.78 | 27.31 | **15.73** | 52.73 |\\n| Ours | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | 15.64 | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| SA | 70.38 | 59.32 | 33.47 | 25.46 | 39.84 | 17.96 | 46.31 |\\n| Ours | **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\\n\\n\\nEvidently, our joint-alignment mechanism makes more impressive clustering results. \\n\\n-------------\\n\\nFurthermore, we also conduct experiments to demonstrate that the geometric features between anchors are beneficial for the clustering performance improvement. \\nThe comparison results are presented in the following table where 'NAA' denotes the results not considering the characteristics between anchors. 
\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| NAA | 71.51 | 49.05 | 30.35 | 16.75 | 47.05 | 26.69 | 52.15 |\\n| Ours | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| NAA | 83.97 | 40.21 | 6.02 | 2.53 | 23.19 | 15.48 | 58.13 |\\n| Ours | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| NAA | 73.79 | 51.25 | 30.42 | 28.54 | 43.04 | 17.70 | 46.77 |\\n| Ours | **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\\n\\n\\nIt can be seen that our results involving anchor-anchor characteristics are more encouraging. \\n\\n\\n\\n\\n\\n\\n\\n\\n\\n**Q2:** The authors do not compare the proposed method with theses popular deep learning ones. \\n\\n\\n**A2(1):** Thanks. We organize the comparative experiments with deep learning methods AdaGAE [1], DEMVC [2], MFLVC [3], DSMVC [4]. \\n\\nAdaGAE utilizes a graph auto-encoder to extract the potential high-level information behind data and the non-euclidean structure, and avoids the collapse by \\nbuilding the connections between sub-clusters before they become thoroughly random in the latent space.\\n\\n\\nDEMVC generates the embedded feature representations by deep auto-encoders, and adopts the auxiliary distribution generated by $k$-means to refine the deep auto-encoders and clustering soft assignments for all views. \\n\\n \\nMFLVC learns different levels of features from the raw features in a fusion-free manner to alleviate the conflict between learning consistent common semantics and reconstructing inconsistent view-private information. 
\n\nDSMVC concurrently exploits complementary information and discards the meaningless noise by automatically selecting features to reduce the risk of clustering performance degradation caused by view increase.\n\n\n [1] Li et al., Adaptive Graph Auto-Encoder for General Data Clustering, IEEE TPAMI, 2022. \n\n [2] Xu et al., Deep Embedded Multi-view Clustering with Collaborative Training, Information Sciences, 2021. \n\n [3] Xu et al., Multi-Level Feature Learning for Contrastive Multi-View Clustering, IEEE CVPR, 2022. \n\n [4] Tang et al., Deep Safe Multi-view Clustering: Reducing the Risk of Clustering Performance Degradation Caused by View Increase, IEEE CVPR, 2022.\"}", "{\"title\": \"Response to SAC, AC and Reviewers\", \"comment\": \"Dear SAC, AC and Reviewers,\n\nWe sincerely appreciate your precious time and profound comments. Your expertise and thorough evaluation have greatly enhanced the quality and clarity of our research. The constructive criticism and thoughtful suggestions have been instrumental in strengthening our work and refining our ideas! \n\n\nIn this work, we devise a joint-alignment mechanism, which does not require selecting the baseline view as current methods do and also can coordinate with the generation of anchors, to alleviate the anchor mismatching issue. It flexibly rearranges anchors in their original dimension space according to the characteristics of each view itself, and makes view information able to interact across different levels. \n\n\nMoreover, we explicitly take into account the geometric characteristics between (aligned) anchors, and successfully feed them into the anchor-sample affinity to exploit the manifold structure hidden within original samples more sufficiently. \n\nFurther, we directly learn the consensus cluster indicators that bridge all anchors, permutations and views. 
This not only gathers multi-view information at the cluster-label level, but also provides a common structure for anchors on different views to induce them to rearrange towards the correct alignment direction.\n\nMeanwhile, a feasible solving scheme with linear complexity enables our model to work effectively and efficiently. \n\n\nIn addition to these, we also organized some new experiments against the latest alignment methods and deep learning methods. The comparison results demonstrate that our proposed method provides preferable clustering performance. \n\n\nThanks once again for your invaluable contributions and support throughout the review process. Please don't hesitate to contact us if you have any questions. \n\nBest wishes,\n\nThe authors of 195\"}", "{\"title\": \"Thanks for Reviewer 4XPe\", \"comment\": \"Dear reviewer 4XPe,\n\nWe greatly value your insightful and constructive feedback. We hope that our response has addressed your concerns. If you have any further suggestions or questions, please do not hesitate to share them. We are more than willing to discuss them with you.\n\nWe fully understand that you are extremely busy, so we would greatly appreciate your time in this process.\n\nBest wishes,\n\nThe authors of 195\"}", "{\"title\": \"Response to Reviewer 4XPe (Part-2)\", \"comment\": \"**A2(2):** About the role of anchor alignment, in this model, it aims at rearranging anchors to build pure self-expression affinities. If not aligning, the structure of the generated anchor-anchor affinity on each view will be chaotic, and accordingly will deteriorate the anchor-sample relationship, hindering the clustering performance. 
To validate this point, we conduct the comparison experiments without alignment, as shown in the following table where 'NA' denotes the results based on no-alignment.\\n\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| NA | 80.73 | 76.59 | 31.65 | 16.67 | 45.29 | 25.91 | 53.68 |\\n| Ours | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| NA | 82.53 | 39.55 | 35.41 | 3.32 | 24.77 | 15.30 | 56.47 |\\n| Ours | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| NA | 79.47 | 72.23 | 30.69 | 21.14 | 42.59 | 17.90 | 47.41 |\\n| Ours | **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\\n\\n\\n\\nIt can be observed that our results containing alignment evidently outperform these without alignment. \\n\\n\\n\\n---------------------------\\n\\n About the specific conditions or datasets where the necessity of anchor alignment might be relaxed or modified, considering the fact that multi-view data typically contains complementary view information and consensus view information and thereby more detailedly describes the instances, if the consensus information outweighs the complementary information, \\t we could build unified anchors as previous methods do to construct the similarity and thereby avoid/relax alignment. However, due to the complexity of multi-view data, how to effectively measure the complementary information and consensus information is a challenging task. In the future, we will try to do this from the perspective of information theory.\"}", "{\"title\": \"Response to Reviewer zrPV(Part-2)\", \"comment\": \"**Q2:** \\tWhy learn an anchor affinity matrix $\\\\mathbf{S}_p$ for each view separately? It seems to overlook inter-view interactions. 
Why not directly learn a consensus anchor affinity matrix? Will it improve the performance?\\n\\n**A2:** Thanks. The motivation for doing so is that each view typically owns exclusive characteristics and learning an anchor affinity matrix for each view could better exploit the features belonging to each view itself. \\n\\n---------\\n\\nAbout the inter-view interactions, all anchor-anchor and anchor-sample affinities on views can communicate with each other via the shared cluster indicator matrix $\\\\mathbf{C}$ owing to the joint-optimization mechanism. (Kindly note that $\\\\mathbf{L_s}$ consists of the anchor-anchor affinity $\\\\mathbf{S}_p$). \\n\\n------\\n\\nDirectly learning a consensus anchor affinity matrix for all views could omit some informative features of certain views. Of course, this is also a flexible scheme for MVC. We conduct some experiments to validate the clustering performance under this situation, as shown in the following table where 'CAA' denotes the results based on the consensus anchor affinity. (We do not include the variance terms since their variances are all zero because of the embedding-free property.) 
\\n\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| CAA | 81.23 | 71.33 | 49.73 | 24.97 | 48.46 | 22.36 | 53.38 |\\n| Ours | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| CAA | 82.76 | 42.26 | 39.87 | 6.03 | 29.89 | 13.43 | 51.97 |\\n| Ours | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| CAA | 80.64 | 69.72 | **42.28** | 25.21 | **46.13** | 19.58 | **52.21** |\\n| Ours | **87.92** | **78.12** | 41.12 | **28.55** | 44.84 | **20.64** | 51.37 |\\n\\n\\n It can be seen that we receive preferable results in most cases.\"}", "{\"title\": \"Thanks for Reviewer zrPV\", \"comment\": \"Dear reviewer zrPV,\\n\\nWe greatly value your insightful and constructive feedback. We hope that our response has addressed your concerns. If you have any further suggestions or questions, please do not hesitate to share them. We are more than willing to discuss them with you.\\n\\nWe fully understand that you are extremely busy, so would greatly appreciate your time in this process.\\n\\nBest wishes,\\n\\nThe authors of 195\"}", "{\"summary\": \"This paper aims to address these problems: (1) they existing methods focus only on the affinity relationship between anchors and samples, while overlooking that between anchors; (2) the cluster order is inconsistent across views and accordingly anchors encounter misalignment issue due to the lack of data labels. The proposed method explicitly exploits the geometric properties between anchors via self-expression learning skill, and utilizes topology learning strategy to feed captured anchor-anchor features into anchor-sample graph so as to explore the manifold structure hidden within samples more adequately. 
Experiments on multiple publicly available datasets confirm the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) The proposed method considers the affinity relationship between anchors.\n(2) The proposed method devises a joint-alignment mechanism that not only eliminates the need for selecting the baseline view but also coordinates well with the generation of anchors.\n(3) The proposed method has linear complexity for the loss function.\", \"weaknesses\": \"(1) The novelty of this work is limited since the involved components have been widely used for anchor learning and spectral clustering. The authors only perform these components on the anchor data.\n(2) The authors do not compare the proposed method with these popular deep learning ones.\", \"questions\": \"The authors should check the data of some methods, such as OrthNTF and GSC, since the performance of these methods is very poor.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the DLA-EF-JA model, a multi-view clustering technique that leverages dual-level affinity to capture both anchor-sample and anchor-anchor relationships within data. The model introduces a joint-alignment mechanism to address the anchor misalignment problem across views, which eliminates the need for a baseline view. Unlike traditional embedding methods, DLA-EF-JA generates cluster labels directly, reducing variance and improving clustering stability. Extensive experiments across diverse datasets demonstrate that the proposed model achieves competitive performance compared to existing multi-view clustering methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The model\\u2019s dual-level affinity mechanism effectively captures both anchor-sample and anchor-anchor relationships, enhancing clustering accuracy by leveraging a fuller view of the data structure.\\n2. The flexible joint-alignment method addresses anchor misalignment issues without requiring a fixed baseline view, making the model versatile for clustering data from different sources.\\n3. The model's effectiveness is demonstrated through comprehensive evaluation on multiple datasets, highlighting its adaptability and strong performance across different data types and views.\", \"weaknesses\": \"1. **Limited Learning of Cross-View Complementarity:** While the model integrates anchor relations, it lacks complex constraints like the Schatten p-norm that could help capture deeper cross-view complementarities. This may limit the model\\u2019s ability to fully leverage unique, complementary information in views with highly distinct features or dimensions. How does the model handle scenarios where the quality of anchors varies significantly across different views?\\n\\n2. **Necessity of Anchor Alignment:** The reliance on anchor alignment to maintain cross-view consistency introduces additional computational steps. Although this approach appears beneficial, some recent multi-view clustering methods successfully avoid alignment through feature space fusion or shared representations. It would be useful for the authors to elaborate on the essential role of anchor alignment in this model and under what conditions it might be adapted or simplified. Are there specific conditions or datasets where the necessity of anchor alignment might be relaxed or modified?\\n\\n3. **Complexity of the Model:** The model is somewhat complex, introducing more variables and mathematical processes. A more detailed explanation of the transition from Equation 2 to Equation 3 would enhance reader understanding of the methodology.\\n\\n4. 
**Hyperparameter Tuning Requirement:** The model\\u2019s performance is sensitive to carefully tuned hyperparameters, such as \\u03bb and \\u03b2. Can the authors provide further insights into the potential effects of anchor noise and how it could be mitigated to improve robustness?\", \"questions\": \"Same as weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Ccq1 (Part-2)\", \"comment\": \"**A1(2):** \\tBesides, we also conduct separate-alignment (SA) as current methods do, and the comparison results are summarized in the following table. (We omit the variance items since they are all zero.)\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:----------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|\\n| ACC(\\\\%) | | | | | | | |\\n| SA | 72.37 | 62.43 | 45.78 | 22.87 | 44.36 | 22.98 | 51.22 |\\n| Ours | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| SA | 71.97 | 41.24 | 34.76 | 5.78 | 27.31 | **15.73** | 52.73 |\\n| Ours | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | 15.64 | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| SA | 70.38 | 59.32 | 33.47 | 25.46 | 39.84 | 17.96 | 46.31 |\\n| Ours | **87.92** | **78.12** | **41.12** | **28.55** | **44.84** | **20.64** | **51.37** |\\n\\n\\n\\nEvidently, our joint-alignment receives preferable results in most cases. \\n\\n\\n\\n**Q2:** The comparison methods lack some latest works. For example, the reference Liu 2024 (in line 581) was discussed in this paper, which includes anchor alignment mechanism, but it is not compared with the proposed work. \\n\\n\\n**A2:** Thanks. We organize some new comparison experiments in which [1], [2] and [3] are all based on anchor alignment methods. 
([3] is the Liu 2024 reference mentioned above.) \\n\\nRef [1] utilizes feature information and structure information of the bipartite graph generated by fixed anchors to build the matching relationship, and regards the first view of each dataset as the baseline view. \\n\\nRef [2] gets rid of prior knowledge by identifying and selecting discriminative anchors within a single view using hierarchical searching, and takes the view exhibiting the highest anchor graph quality as the baseline view. \\n\\nRef [3] narrows the spatial distribution of anchors on similar views by leveraging the inter-view correlations to enhance the expression ability of anchors, and treats the column-wise concatenation of the views as the baseline view. \\n\\t\\n\\t\\nPlease see the above table in **A1(1)** for experimental results. \\n\\n\\n\\n\\n**Q3:** In Table 1, several compared methods exhibit extremely poor performance on some datasets. It might be better if the authors could explain the possible reasons. \\n\\n**A3:** Thanks. We sincerely apologize for causing confusion to Reviewer Ccq1. The reason is that the default parameter settings in the released code are inconsistent with those in the paper. We have carefully corrected these according to the guidance in their paper, and reorganized the comparison experiments. Please check it. We deeply appreciate Reviewer Ccq1 for reminding us of this, which significantly helps us improve the quality of this manuscript. 
\\n\\n\\nEspecially, PMSC, AMGL and MLRSSC still express inferior performance in certain scenarios, the reasons of which could be that PMSC reaches the consensus clustering under the premise of the basic partition realizing the ground truth and meanwhile equally treats every view, the factors generated by the cluster indicator with orthogonal constraints in AMGL impair the discriminability of some graphs, and MLRSSC linearly combines the generated representation matrices and only utilizes truncation operation to determine the penalty parameters.\\n\\n\\n\\n [1] Wang et al., Align then fusion: Generalized large-scale multi-view clustering with anchor matching correspondences, NeurIPS, 2022. \\n\\n [2] Ma et al., Automatic and Aligned Anchor Learning Strategy for Multi-View Clustering, ACM MM, 2024. \\n\\n [3] Liu et atl., Learn from view correlation: An anchor enhancement strategy for multi-view clustering. IEEE CVPR, 2024\"}", "{\"title\": \"Response to Reviewer 4XPe (Part-1)\", \"comment\": \"We very thank Reviewer 4XPe's thoughtful suggestions and guidance for the revision of this manuscript. All concerns have been carefully responded point by point. We sincerely hope these issues have been cleared.\\n\\n**Q1:** \\tIt lacks complex constraints like the Schatten p-norm that could help capture deeper cross-view complementarities. \\tHow does the model handle scenarios where the quality of anchors varies significantly across different views? \\n\\n**A1:** Thanks! This is a very promising research direction. The tensor Schatten p-norm is commonly regarded as a good means to exploit the complementary information between views, like [1], [2], [3], [4], [5], [6], etc. Our model at present dose not involve the Schatten p-norm, and adopts the shared cluster indicator matrix to capture view complementary information. Including the Schatten p-norm could help further enhance the clustering ability of our model. We will pay efforts to explore this in the future. 
We sincerely appreciate Reviewer 4XPe's highly constructive suggestions!\\n\\n\\nRegarding handling the scenarios where the quality of anchors varies significantly across different views: in this work, we utilize a learnable view-related weighting scheme to adaptively adjust the contributions of each view so as to balance the importance of the anchors on that view. Perhaps an anchor-wise weighting scheme would be even more advisable, since anchors on the same view can also have diverse importance. We will further investigate this in the future. We thank Reviewer 4XPe for bringing this to our attention.\\n\\n\\n\\n**Q2:** The reliance on anchor alignment to maintain cross-view consistency introduces additional computational steps. Some methods avoid alignment through feature space fusion or shared representations. It would be useful for the authors to \\n elaborate on the essential role of anchor alignment and under what conditions it might be adapted or simplified. Are there specific conditions or datasets where the necessity of anchor alignment might be relaxed or modified? \\n\\n**A2(1):** Thanks. The anchor alignment indeed introduces additional computational steps due to the need for optimizing the permutation variables. \\n\\n-- -- -- -- \\n\\t\\nWorks based on feature space fusion or shared representations usually extract a single group of unified anchors, rather than multiple groups of view-specific anchors, to construct the similarity relationship. Although it avoids alignment, this paradigm cannot effectively exploit complementary information between views because the unified anchors are shared by all views. To further illustrate this point, \\nwe organized comparison experiments between unified anchors (UA) and view-specific anchors (VSA, i.e., ours). The results are presented in the following table. 
\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| UA | 81.23 | 71.33 | 49.73 | 24.97 | 48.46 | 22.36 | 53.38 |\\n| VSA | **85.47** | **80.66** | **52.44** | **26.22** | **54.26** | **26.83** | **57.36** |\\n| NMI(\\\\%) | | | | | | | |\\n| UA | 82.76 | 42.26 | 39.87 | 6.03 | 29.89 | 13.43 | 51.97 |\\n| VSA | **89.97** | **45.25** | **43.70** | **6.25** | **31.87** | **15.64** | **59.21** |\\n| Fscore(\\\\%) | | | | | | | |\\n| UA | 80.64 | 69.72 | **42.28** | 25.21 | **46.13** | 19.58 | **52.21** |\\n| VSA | **87.92** | **78.12** | 41.12 | **28.55** | 44.84 | **20.64** | 51.37 |\\n\\n\\nAs seen, our results are more preferable than the counterparts adopting unified anchors, the reason of which could be that the\\tview-exclusive complementary representation information (view-specific (aligned) anchors contain) outweighs the view-common consensus representation information (unified anchors contain). \\n\\n\\n\\n\\n\\n\\t\\n\\t\\n\\t[1] Guo et al., Logarithmic Schatten-p Norm Minimization for Tensorial Multi-View Subspace Clustering, IEEE TPAMI, 2023.\\n\\t\\n\\t[2] Xia et al., Tensorized Bipartite Graph Learning for Multi-view Clustering, IEEE TPAMI, 2023.\\n\\t\\n\\t[3] Feng et al., Federated Fuzzy C-means with Schatten-p Norm Minimization, ACM MM, 2024.\\n\\t\\n\\t[4] Li et al., Label Learning Method Based on Tensor Projection, ACM KDD, 2024. \\n\\t\\n\\t[5] Sun et al., Improved Weighted Tensor Schatten p-Norm for Fast Multi-view Graph Clustering, ACM MM, 2024. \\n\\t\\n\\t[6] Wang et al., Bi-Nuclear Tensor Schatten-p Norm Minimization for Multi-View Subspace Clustering, IEEE TIP, 2023.\"}", "{\"title\": \"Thanks for Reviewer Ccq1\", \"comment\": \"Dear reviewer Ccq1,\\n\\nWe greatly value your insightful and constructive feedback. We hope that our response has addressed your concerns. 
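The learnable view-related weighting scheme mentioned in A1 above admits a standard closed-form solution once the per-view errors are fixed: minimizing $\sum_p \boldsymbol{\alpha}_p^2 e_p$ subject to $\boldsymbol{\alpha}^{\top}\mathbf{1}=1,\ \boldsymbol{\alpha}\geq 0$ gives $\boldsymbol{\alpha}_p \propto 1/e_p$. A minimal numerical sketch under this assumption (the error values are invented for illustration; the manuscript's exact update rule is not shown here):

```python
import numpy as np

def view_weights(errors):
    """Closed-form minimizer of sum_p alpha_p^2 * e_p  s.t.  alpha >= 0, sum(alpha) = 1.

    Lagrangian stationarity gives 2 * alpha_p * e_p = const, hence alpha_p proportional to 1 / e_p.
    """
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

errors = np.array([0.5, 2.0, 4.0])   # hypothetical per-view reconstruction errors
alpha = view_weights(errors)
uniform = np.full(errors.size, 1.0 / errors.size)

assert np.isclose(alpha.sum(), 1.0) and (alpha >= 0).all()
# The low-error view receives the largest weight, and the optimal weights
# never do worse than uniform weighting on the objective:
assert alpha.argmax() == errors.argmin()
assert (alpha**2 * errors).sum() <= (uniform**2 * errors).sum()
```

The squared weighting $\boldsymbol{\alpha}_p^2$ is what makes this subproblem smooth; with plain $\boldsymbol{\alpha}_p$ the minimizer would degenerate to a single view.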
If you have any further suggestions or questions, please do not hesitate to share them. We are more than willing to discuss them with you. \\n\\n\\nWe fully understand that you are extremely busy, so would greatly appreciate your time in this process. \\n\\nBest wishes,\\n\\nThe authors of 195\"}", "{\"summary\": \"In this work, a multi-view clustering method with joint anchor alignment was developed, which introduces dual-level affinity and achieves embedding-free clustering. The work is designed to address several problems due to the anchor misalignment issues. Therefore, the authors introduce a permutation mechanism for each view to jointly adjust the anchors. Besides, the method is free of learning the embedding by constructing the cluster labels directly from original samples. A self-expression learning structure is utilized on the anchors, which utilizes topology learning strategy to feed captured anchor-anchor features into anchor-sample graph. Extensive experiments validate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-structured, and the authors conduct a relatively comprehensive review on existing literatures.\\n2.\\tThe experimental results demonstrate the effectiveness of the work\", \"weaknesses\": \"1.\\tA core idea of the work is to introduce an anchor permutation matrix, while this idea has been widely adopted by previous works. Hence, the novelty of the paper might not be sufficient to be published.\\n2.\\tThe comparison methods lack some latest works. Since the work is an anchor alignment based method, more related works with anchor alignment should be compared. 
For example, the reference Liu 2024 (in line 581) was discussed in this paper, which includes an anchor alignment mechanism, but it is not compared with the proposed work.\\n3.\\tIn Table 1, several compared methods exhibit extremely poor performance on some datasets (e.g., PMSC on Cora, AMGL on DeRMATO). It might be better if the authors could explain the possible reasons.\\n4.\\tTable 5 does not include all the symbols. The Methodology section might be too brief, which should be introduced with more details by explaining the reasons for the design of each component.\", \"questions\": \"1.\\tWhat is the difference between the anchor alignment module and those of existing works?\\n2.\\tWhy do some compared methods exhibit extremely poor performance on some datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Ccq1 (Part-1)\", \"comment\": \"We sincerely thank Reviewer Ccq1 for the constructive comments and guidance for the revision of this manuscript. All concerns have been carefully responded to point by point. We sincerely hope these issues have been resolved.\\n\\n\\n**Q1:** A core idea of the work is to introduce an anchor permutation matrix, while this idea has been widely adopted by previous works. \\n\\n**A1(1):** Thanks. Current permutation strategies generally require selecting the baseline view, such as [1], [2], [3]. Moreover, the anchor generation, the anchor transformation and the graph construction are separated from each other, which hinders the interaction of view information across different levels. Unlike them, in this work, we associate a learnable permutation with each view to freely rearrange anchors in their original space, successfully unify anchor generation, anchor transformation and graph construction within one common framework, and meanwhile provide a feasible solving scheme with linear complexity. 
Owing to the joint-alignment mechanism, we do not involve selecting the baseline view, and meanwhile anchors can be permuted automatically according to respective view characteristics. Also, this paradigm can coordinate with the learning of anchors. \\n\\nParticularly, we conduct some experiments against [1], [2] and [3] to demonstrate the advantages of our alignment mechanism, as shown in the following table. \\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:----------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|\\n| ACC(\\\\%) | | | | | | | |\\n| Ref [1] | 74.13($\\\\pm$3.36) | 53.34($\\\\pm$2.84) | 47.10($\\\\pm$4.07) | 22.95($\\\\pm$0.89) | 42.31($\\\\pm$3.17) | 25.58($\\\\pm$0.66) | 55.44($\\\\pm$2.25) |\\n| Ref [2] | 62.60($\\\\pm$5.50) | 38.17($\\\\pm$3.58) | 44.73($\\\\pm$3.74) | **33.31($\\\\pm$1.19)** | 49.56($\\\\pm$2.45) | 26.38($\\\\pm$0.93) | 57.24($\\\\pm$2.26) |\\n| Ref [3] | **91.82($\\\\pm$3.78)** | 44.61($\\\\pm$4.31) | 39.68($\\\\pm$1.36) | 29.85($\\\\pm$0.02) | 50.88($\\\\pm$0.24) | **27.56($\\\\pm$1.11)** | 54.89($\\\\pm$0.63) |\\n| Ours | 85.47($\\\\pm$0.00) | **80.66($\\\\pm$0.00)** | **52.44($\\\\pm$0.00)** | 26.22($\\\\pm$0.00) | **54.26($\\\\pm$0.00)** | 26.83($\\\\pm$0.00) | **57.36($\\\\pm$0.00)** |\\n| NMI(\\\\%) | | | | | | | |\\n| Ref [1] | 80.78($\\\\pm$4.44) | 38.41($\\\\pm$2.92) | 33.50($\\\\pm$2.56) | 9.94($\\\\pm$1.54) | 28.50($\\\\pm$2.29) | 12.86($\\\\pm$0.67) | 57.82($\\\\pm$0.93) |\\n| Ref [2] | 57.43($\\\\pm$3.28) | 41.45($\\\\pm$4.42) | 26.63($\\\\pm$3.37) | **11.68($\\\\pm$2.11)** | 31.03($\\\\pm$1.66) | 14.02($\\\\pm$0.71) | 58.61($\\\\pm$1.62) |\\n| Ref [3] | 86.62($\\\\pm$1.95) | **49.15($\\\\pm$0.86)** | 17.79($\\\\pm$0.67) | 6.44($\\\\pm$0.02) | 24.47($\\\\pm$0.06) | 13.32($\\\\pm$0.50) | 53.55($\\\\pm$0.34) |\\n| Ours | **89.97($\\\\pm$0.00)** | 
45.25($\\pm$0.00) | **43.70($\\pm$0.00)** | 6.25($\\pm$0.00) | **31.87($\\pm$0.00)** | **15.64($\\pm$0.00)** | **59.21($\\pm$0.00)** |\\n| Fscore(\\%) | | | | | | | |\\n| Ref [1] | 80.15($\\pm$7.13) | 41.01($\\pm$4.20) | 38.20($\\pm$1.89) | 23.79($\\pm$0.77) | 43.86($\\pm$2.61) | 17.07($\\pm$0.35) | 48.78($\\pm$1.94) |\\n| Ref [2] | 56.16($\\pm$4.80) | 38.28($\\pm$3.01) | 32.61($\\pm$2.53) | 27.58($\\pm$1.50) | 41.13($\\pm$1.30) | 17.40($\\pm$0.40) | 47.61($\\pm$1.62) |\\n| Ref [3] | **87.94($\\pm$3.70)** | 46.16($\\pm$1.37) | 26.57($\\pm$1.24) | 22.19($\\pm$0.01) | 36.19($\\pm$0.63) | 17.15($\\pm$0.27) | 45.99($\\pm$0.53) |\\n| Ours | 87.92($\\pm$0.00) | **78.12($\\pm$0.00)** | **41.12($\\pm$0.00)** | **28.55($\\pm$0.00)** | **44.84($\\pm$0.00)** | **20.64($\\pm$0.00)** | **51.37($\\pm$0.00)** |\\n\\n\\nAs seen, our results are more desirable in most cases. \\n\\n\\n\\n\\t[1] Wang et al., Align then fusion: Generalized large-scale multi-view clustering with anchor matching correspondences, NeurIPS, 2022. \\n\\t\\n\\t[2] Ma et al., Automatic and Aligned Anchor Learning Strategy for Multi-View Clustering, ACM MM, 2024. \\n\\t\\n\\t[3] Liu et al., Learn from view correlation: An anchor enhancement strategy for multi-view clustering. IEEE CVPR, 2024.\"}", "{\"title\": \"Response to Reviewer 4XPe (Part-3)\", \"comment\": \"**Q3:** \\tThe model is somewhat complex, introducing more variables and mathematical processes. A more detailed explanation of the transition from Eq.(2) to Eq.(3) would enhance reader understanding of the methodology.\\n\\n**A3:** Good suggestion! We here explain the objective transition in as much detail as possible. 
\\n\\nThe objective function in Eq.(2) is $\\\\sum _{p=1}^v \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{Z} _{p} \\\\right\\\\|| _F^2 + \\n\\\\lambda \\\\left\\\\|| \\\\mathbf{A} _{p} -\\\\mathbf{A} _{p} \\\\mathbf{S} _{p} \\\\right\\\\||_F^2 + \\\\beta \\\\sum _{i,j=1}^{m} \\\\left\\\\|| [\\\\mathbf{Z} _p] _{i,:} - [\\\\mathbf{Z} _p] _{j,:} \\\\right\\\\|| _2^2 [\\\\mathbf{S} _p] _{i,j} $. \\n\\n The objective function in Eq.(3) is \\n$\\\\sum _{p=1}^v \\\\boldsymbol{\\\\alpha} _p^2 \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\|| _F^2 + \\\\lambda \\\\left\\\\|| \\\\mathbf{A} _{p} \\\\mathbf{T} _p - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{S} _{p} \\\\right\\\\|| _F^2 + \\\\beta \\\\operatorname{Tr}( \\\\mathbf{B} _p^{\\\\top} \\\\mathbf{L _s} \\\\mathbf{B} _p \\\\mathbf{C} \\\\mathbf{C} ^{\\\\top} ) $.\\n\\t\\nFirstly, considering that the essence of anchor misalignment is that the order of anchors on different views is not identical, we can eliminate the misalignment problem via re-arranging anchors. Specially, we associate each view with a learnable matrix $\\\\mathbf{T}_p$ to flexibly transform anchors according to the characteristics of respective view itself. (Kindly note that owing to transforming anchors in original view space, this can well preserve the view diversity.) 
Accordingly, the anchor matrix $\\\\mathbf{A}_p$ on each view is reformulated as $\\\\mathbf{A}_p \\\\mathbf{T}_p$, the self-expression affinity learning $ \\\\left\\\\|| \\\\mathbf{A} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{S} _{p} \\\\right\\\\||_F^2 $ becomes $ \\\\left\\\\|| \\\\mathbf{A} _{p} \\\\mathbf{T} _p - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{S} _{p} \\\\right\\\\||_F^2 $, and the reconstruction error item $ \\\\left\\\\||\\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{Z} _{p} \\\\right\\\\||_F^2$ becomes $ \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{Z} _{p} \\\\right\\\\|| _F^2 $. \\n\\t \\nThen, due to variance arising from the construction of embedding, we avoid forming embedding, and choose to directly learn the cluster indicators. We factorize the anchor graph as a basic coefficient matrix and a consensus matrix, and utilize binary learning to optimize the consensus matrix. Therefore, we have that the reconstruction error item $\\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{Z} _{p} \\\\right\\\\|| _F^2 $ is reformulated as $ \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\|| _F^2$, and the point-point guidance $\\\\sum _{i,j=1}^{m} \\\\left\\\\|| [\\\\mathbf{Z} _p] _{i,:} - [\\\\mathbf{Z} _p] _{j,:} \\\\right\\\\|| _2^2 [\\\\mathbf{S} _p] _{i,j}$ is reformulated as $\\\\sum _{i,j=1}^{m} \\\\left\\\\|| [\\\\mathbf{B} _p \\\\mathbf{C}] _{i,:} - [\\\\mathbf{B} _p \\\\mathbf{C}] _{j,:} \\\\right\\\\|| _2^2 [\\\\mathbf{S} _p] _{i,j}$, which can be equivalently written as the matrix trace form of $\\\\operatorname{Tr}(\\\\mathbf{B} _p^{\\\\top} \\\\mathbf{L _s} \\\\mathbf{B} _p \\\\mathbf{C} \\\\mathbf{C}^{\\\\top})$. $\\\\mathbf{L_s} = \\\\mathbf{D}_p - \\\\mathbf{S}_p$, $\\\\mathbf{D} _p = diag(\\\\sum _{j=1}^{m} [\\\\mathbf{S} _p] _{i,j}~| i=1,\\\\cdots, m)$. 
(Kindly note that the consensus cluster indicator matrix $\\\\mathbf{C}$ provides a common structure for anchors on all views, inducing them rearranging towards the common structure.)\\n\\t\\nFinally, considering that views typically have different levels of importance, we introduce a learnable weighting variable for each view to automatically measure its contributions. Therefore, we have that $\\\\sum _{p=1}^{v} \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\||_F^2 $ is reformulated as $\\\\sum _{p=1}^{v} \\\\boldsymbol{\\\\alpha} _p^2 \\\\left\\\\||\\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\||_F^2$. \\n\\t\\nAt this point, we can obtain the objective function like Eq.(3).\\n\\t\\nFor the feasible region, $ \\\\\\\\{ \\\\mathbf{T} _p^{\\\\top} \\\\mathbf{1}=\\\\mathbf{1}, \\\\mathbf{T} _p \\\\mathbf{1}=\\\\mathbf{1}, \\\\mathbf{T} _p \\\\in \\\\\\\\{0,1\\\\\\\\}^{m \\\\times m} \\\\\\\\}$ denotes only re-arranging anchors and does not change the anchor values. $\\\\\\\\{ \\\\mathbf{S} _{p}^{\\\\top} \\\\mathbf{1}=\\\\mathbf{1}, \\\\mathbf{S} _{p} \\\\geq 0, \\\\sum _{i=1}^{m} [\\\\mathbf{S} _{p}] _{i,i}=0 \\\\\\\\} $ denotes expressing oneself using other anchors and \\tmeanwhile avoids expressing oneself using oneself. $\\\\{\\\\mathbf{B} _p^{\\\\top} \\\\mathbf{B} _p = \\\\mathbf{I} _k \\\\} $ denotes learning discriminative basic coefficients. $\\\\\\\\{ \\\\sum _{i=1}^{k} \\\\mathbf{C} _{i,j} =1, j= {1, 2, \\\\dots, n}, \\\\mathbf{C} \\\\in \\\\\\\\{0,1\\\\\\\\}^{k \\\\times n} \\\\\\\\} $ denotes that each column has only one non-zero element, that is, a sample belongs to only one cluster. 
$ \\\\\\\\{ \\\\boldsymbol{\\\\alpha} ^{\\\\top} \\\\mathbf{1} = 1, \\\\boldsymbol{\\\\alpha} \\\\geq 0 \\\\\\\\} $ denotes normalizing and meanwhile avoids trivial solutions.\"}", "{\"title\": \"Response to Reviewer Ccq1 (Part-4)\", \"comment\": \"**A4(2):** Subsequently, considering that the variance arises from the construction of embedding, we avoid forming embedding and choose to directly learn the cluster indicators. Specially, we factorize the anchor graph as a basic coefficient matrix $\\\\mathbf{B}_p$ and a consensus matrix $\\\\mathbf{C}$, and utilize binary learning to optimize the consensus matrix. Therefore, we have that the term $ \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{Z} _{p} \\\\right\\\\|| _F^2 $\\n is reformulated as $ \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\|| _F^2$. The point-point guidance term $\\\\sum _{i,j=1}^{m} \\\\left\\\\|| [\\\\mathbf{Z} _p] _{i,:} - [\\\\mathbf{Z} _p] _{j,:} \\\\right\\\\|| _2^2 [\\\\mathbf{S} _p] _{i,j}$ is reformulated as $\\\\sum _{i,j=1}^{m} \\\\left\\\\|| [\\\\mathbf{B} _p \\\\mathbf{C}] _{i,:} - [\\\\mathbf{B} _p \\\\mathbf{C}] _{j,:} \\\\right\\\\|| _2^2 [\\\\mathbf{S} _p] _{i,j}$, which can be equivalently transformed as the matrix trace form of $\\\\operatorname{Tr}(\\\\mathbf{B}_p ^{\\\\top} \\\\mathbf{L _s} \\\\mathbf{B} _p \\\\mathbf{C} \\\\mathbf{C}^{\\\\top})$. $\\\\mathbf{L _s} = \\\\mathbf{D} _p - \\\\mathbf{S} _p$, $\\\\mathbf{D} _p = diag( \\\\sum _{j=1}^{m} [\\\\mathbf{S} _p] _{i,j}~| i=1,\\\\dots, m )$. This paradigm not only makes the consensus matrix $\\\\mathbf{C}$ successfully represent the cluster indicators, but also provides a common structure for anchors on all views, inducing them rearranging towards the corresponding matching relationship. 
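The constraint set on $\mathbf{C}$ described above makes each column a one-hot indicator, so cluster labels are read directly from $\mathbf{C}$ without a separate embedding plus k-means step. A minimal illustration (the label vector below is invented):

```python
import numpy as np

labels = np.array([0, 2, 1, 2, 0])   # invented assignment of n = 5 samples to k = 3 clusters
k, n = 3, labels.size

C = np.zeros((k, n))
C[labels, np.arange(n)] = 1          # one-hot columns

# Constraints from the feasible region: binary entries, each column sums to one.
assert ((C == 0) | (C == 1)).all()
assert (C.sum(axis=0) == 1).all()

# Cluster labels are recovered directly from C -- no post-hoc k-means on an embedding.
assert np.array_equal(C.argmax(axis=0), labels)
```

This directness is what removes the embedding-induced variance that the response attributes to traditional two-stage methods.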
\\n \\n\\n\\nAt the last, due to views generally having different levels of importance, we assign a weighting variable to each view to adaptively adjust its contributions. Accordingly, the term $ \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\||_F^2$ is further reformulated as $ \\\\boldsymbol{\\\\alpha} _p^2 \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\|| _F^2$.\\n\\n\\nBased on the above analysis, we have that the objective is formulated as $ \\\\sum _{p=1}^v \\\\boldsymbol{\\\\alpha} _p^2 \\\\left\\\\|| \\\\mathbf{X} _{p} - \\\\mathbf{A} _{p} \\\\mathbf{T} _{p} \\\\mathbf{B} _{p} \\\\mathbf{C} \\\\right\\\\|| _F^2 + \\\\lambda \\\\left\\\\|| \\\\mathbf{A} _{p} \\\\mathbf{T} _p - \\\\mathbf{A} _{p} \\\\mathbf{T} _p \\\\mathbf{S} _{p} \\\\right\\\\|| _F^2 + \\\\beta \\\\operatorname{Tr}( \\\\mathbf{B} _p^{\\\\top} \\\\mathbf{L_s} \\\\mathbf{B} _p \\\\mathbf{C} \\\\mathbf{C}^{\\\\top} ) $.\\n \\nThe first item aims at building the similarity via minimizing the reconstruction error. The second item represents the self-expression affinity of aligned anchors. The third item plays a role in feeding anchor-anchor characteristics into anchor-sample. \\n\\n\\n\\n\\n\\n\\n\\nFurther, the constraints $ \\\\boldsymbol{\\\\alpha} ^{\\\\top} \\\\mathbf{1} = 1 $ and $ \\\\boldsymbol{\\\\alpha} \\\\geq 0 $ aim at doing normalization and meanwhile avoid trivial solutions. $ \\\\{ \\\\mathbf{B} _p^{\\\\top} \\\\mathbf{B} _p = \\\\mathbf{I} _k \\\\} $ aims at learning discriminative basic coefficients. $ \\\\\\\\{ \\\\mathbf{T} _p^{\\\\top} \\\\mathbf{1}=\\\\mathbf{1}, \\\\mathbf{T} _p \\\\mathbf{1} = \\\\mathbf{1}, \\\\mathbf{T} _p \\\\in \\\\\\\\{0,1\\\\\\\\} ^{m \\\\times m} \\\\\\\\} $ aims at rearranging anchors and meanwhile guarantees not to change the values of anchors. 
$ \\\\\\\\{ \\\\sum _{i=1}^{k} \\\\mathbf{C} _{i,j} =1, j= {1, 2, \\\\dots, n}, \\\\mathbf{C} \\\\in \\\\\\\\{0,1\\\\\\\\}^{k \\\\times n} \\\\\\\\}$ guarantees that there is only one non-zero element in each column, i.e., one sample belongs to only one cluster. $\\\\\\\\{ \\\\mathbf{S} _{p}^{\\\\top} \\\\mathbf{1}=\\\\mathbf{1}, \\\\mathbf{S} _{p} \\\\geq 0, \\\\sum _{i=1}^{m} [\\\\mathbf{S} _{p}] _{i,i}=0 \\\\\\\\} $ guarantees expressing oneself through other anchors while avoiding using oneself to express oneself. \\n\\n\\n\\n**Q5:** What is the difference between the anchor alignment module with those of existing works? \\n\\n**A5:** Thanks. The main differences are as follows,\\n \\n* We do not require selecting the baseline view. \\n\\t\\t\\n* We can coordinate with the generation of anchors. \\n \\n \\n\\t\\nThe selection of baseline view not only brings complicated solving procedure but also affects the clustering performance. If the baseline view is not well selected, the graph structure will be inaccurately fused. Unlike this, we do not require the baseline view, and can automatically rearrange anchors according to respective view characteristics. \\n\\t\\nBesides, we also can coordinate with anchors in the unified framework and thereby facilitate the learning of anchors, which makes view information interact across different levels.\"}", "{\"title\": \"Response to Reviewer zrPV(Part-3)\", \"comment\": \"**Q3:** How do you set the number of anchors? What is the influence of it?\\n\\n**A3:** Thanks. In experiments, we set the number of anchors to be equal to the number of clusters. The reasons are as follows. 
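The trace rewrite of the point-point guidance term used in the derivations above follows the standard graph-Laplacian identity $\sum_{i,j} \| \mathbf{M}_{i,:} - \mathbf{M}_{j,:} \|_2^2 \mathbf{S}_{i,j} = 2\operatorname{Tr}(\mathbf{M}^{\top} \mathbf{L_s} \mathbf{M})$ with $\mathbf{L_s} = \mathbf{D} - \mathbf{S}$; taking $\mathbf{M} = \mathbf{B}\mathbf{C}$ and using cyclicity of the trace yields the $\operatorname{Tr}(\mathbf{B}^{\top} \mathbf{L_s} \mathbf{B} \mathbf{C} \mathbf{C}^{\top})$ form (the constant factor can be absorbed into $\beta$). A quick numeric check, assuming a symmetric affinity so that the identity holds exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 5, 3, 8
B = rng.standard_normal((m, k))
C = rng.standard_normal((k, n))

S = rng.random((m, m))
S = (S + S.T) / 2        # symmetric affinity (assumption made for this check)
np.fill_diagonal(S, 0)   # no self-expression of an anchor by itself

M = B @ C                # rows: per-anchor profiles [B C]_{i,:}
lhs = sum(np.sum((M[i] - M[j]) ** 2) * S[i, j]
          for i in range(m) for j in range(m))

L = np.diag(S.sum(axis=1)) - S              # L_s = D - S
rhs = 2 * np.trace(B.T @ L @ B @ C @ C.T)   # = 2 Tr(M^T L_s M) by trace cyclicity

assert np.isclose(lhs, rhs)
```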
\\n\\t\\n\\t\\nWhen updating the variable $\\mathbf{T}_p$, the objective function is $ \\operatorname{Tr} \\left( \\mathbf{T} _{p}^{\\top} \\mathbf{G} _p \\mathbf{T} _{p} \\left( \\lambda \\mathbf{H} _{p} + \\boldsymbol{\\alpha} _p^2 \\mathbf{M} _p - 2\\lambda \\mathbf{S} _{p}^{\\top} \\right) -2\\boldsymbol{\\alpha}_p^2 \\mathbf{T} _{p}^{\\top} \\mathbf{J} _{p} \\right) $, which is the form of $\\mathbf{A}^\\top \\mathbf{B} \\mathbf{A} \\mathbf{C} + \\mathbf{A}^{\\top} \\mathbf{D}$. Besides, the feasible region \\n$\\\\{ \\mathbf{T}_p^{\\top} \\mathbf{1}=\\mathbf{1}, \\mathbf{T}_p \\mathbf{1}=\\mathbf{1}, \\mathbf{T}_p \\in \\\\{0,1\\\\}^{m \\times m} \\\\} $ is discrete. These make this optimization problem hard to solve. To this end, we adopt traversal searching on one-hot vectors to obtain the optimal solution. The traversal searching operation takes $\\mathcal{O}(m!)$ computing overhead, where $m$ is the number of anchors. An overly large $m$ will induce an intensive time cost. Therefore, in all experiments, we set $m$ to the number of clusters $k$. More sophisticated solving schemes could be further investigated in the future. \\n\\n\\n**Q4:** The experimental results are not convincing. For instance, OrthNTF achieves 69.4\\% Acc and 68.6\\% NMI values on the Reuters dataset, while this paper only reports 28.67\\% Acc and 3.07\\% NMI.\\n\\n**A4:** Thanks. We are sorry for bringing unnecessary confusion to Reviewer zrPV. \\n\\t\\nThis is mainly because the default hyper-parameter settings in the released code are not consistent with those in the paper. We have carefully corrected the relevant parameter settings according to the suggestions presented in their paper and re-executed them. Please check the clustering result comparison table. 
Especially, for OrthNTF, on the dataset Reuters, the anchor number is automatically 100 in their experiments while it is automatically 93 in our experiments. The reason is that the dataset versions adopted in experiments are different. Accordingly, the generated clustering results are different. We execute OrthNTF under anchorRate=$[0.1, 0.2, 0.3, \\\\cdots, 1.0]$, $p=[0.1, 0.2, 0.3,\\\\cdots, 1.0]$, $\\\\lambda=[0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 500, 1000, 5000, 10000]$ respectively, and report the highest results. \\n\\n-------\\n\\t\\nThanks very much to Reviewer zrPV for bringing this point to our attention!\"}", "{\"title\": \"Response to Reviewer 2FNR(Part-3)\", \"comment\": \"**A2(2):** The comparison results are reported in the following table.\\n\\n\\n| Dataset | DERMATO | CALTE7 | Cora | REU7200 | Reuters | CIF10Tra4 | FasMNI4V |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| ACC(\\\\%) | | | | | | | |\\n| AdaGAE | 67.88($\\\\pm$0.99) | 42.20($\\\\pm$0.94) | 23.45($\\\\pm$0.29) | 19.43($\\\\pm$1.76) | - | - | - |\\n| DEMVC | 40.50($\\\\pm$0.88) | 54.41($\\\\pm$1.76) | 30.54($\\\\pm$1.64) | 24.58($\\\\pm$1.84) | 53.05($\\\\pm$0.91) | 26.75($\\\\pm$1.16) | 51.28($\\\\pm$0.95) |\\n| MFLVC | 80.73($\\\\pm$0.47) | 43.42($\\\\pm$0.26) | 31.02($\\\\pm$0.82) | 25.42($\\\\pm$1.47) | - | - | - |\\n| DSMVC | 72.35($\\\\pm$0.96) | 41.66($\\\\pm$1.30) | 28.88($\\\\pm$0.93) | 25.04($\\\\pm$0.47) | 53.66($\\\\pm$0.82) | 21.28($\\\\pm$0.99) | 55.33($\\\\pm$0.58) |\\n| Ours | **85.47($\\\\pm$0.00)** | **80.66($\\\\pm$0.00)** | **52.44($\\\\pm$0.00)** | **26.22($\\\\pm$0.00)** | **54.26($\\\\pm$0.00)** | **26.83($\\\\pm$0.00)** | **57.36($\\\\pm$0.00)** |\\n| NMI(\\\\%) | | | | | | | |\\n| AdaGAE | 78.47($\\\\pm$0.36) | 39.28($\\\\pm$0.19) | 5.23($\\\\pm$0.68) | 3.22($\\\\pm$0.27) | - | - | - |\\n| DEMVC | 31.03($\\\\pm$0.11) | 16.70($\\\\pm$0.64) | 6.34($\\\\pm$0.30) | 4.84($\\\\pm$0.90) | 34.21($\\\\pm$0.35) | **16.18($\\\\pm$0.90)** | 
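The $\mathcal{O}(m!)$ traversal search over permutation matrices described in A3 above can be sketched as a brute-force scan; the factorial growth is why $m$ is fixed to the cluster number $k$ in the experiments. The anchors and the matching objective below are made-up stand-ins, not the manuscript's exact subproblem:

```python
from itertools import permutations

import numpy as np

def best_permutation(score, m):
    """Exhaustively scan all m! orderings and return the minimizer of `score`."""
    return min((np.array(p) for p in permutations(range(m))), key=score)

rng = np.random.default_rng(1)
m = 4                                # kept small: the scan visits m! candidates
A_ref = rng.standard_normal((3, m))  # reference anchors (invented for illustration)
A_view = A_ref[:, [3, 1, 0, 2]]      # same anchors observed in shuffled order

# Stand-in matching objective: squared distance to the reference ordering.
score = lambda p: np.sum((A_ref - A_view[:, p]) ** 2)
perm = best_permutation(score, m)

T = np.eye(m)[:, perm]               # the corresponding permutation matrix T_p
assert ((T == 0) | (T == 1)).all()   # binary ...
assert (T.sum(axis=0) == 1).all() and (T.sum(axis=1) == 1).all()  # ... row/column sums one
assert np.allclose(A_view @ T, A_ref)   # the shuffle is recovered exactly
```

The constructed $\mathbf{T}$ satisfies exactly the feasible region used for $\mathbf{T}_p$ ($\mathbf{T}^{\top}\mathbf{1}=\mathbf{1}$, $\mathbf{T}\mathbf{1}=\mathbf{1}$, binary entries), i.e., it only re-orders anchors without altering their values.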
**59.74($\\pm$0.91)** |\\n| MFLVC | 81.23($\\pm$0.10) | **58.74($\\pm$0.15)** | 12.97($\\pm$0.14) | 3.25($\\pm$0.90) | - | - | - |\\n| DSMVC | 76.15($\\pm$0.14) | 36.68($\\pm$0.18) | 8.14($\\pm$0.55) | 4.36($\\pm$0.41) | **35.43($\\pm$0.10)** | 8.82($\\pm$0.46) | 55.33($\\pm$0.97) |\\n| Ours | **89.97($\\pm$0.00)** | 45.25($\\pm$0.00) | **43.70($\\pm$0.00)** | **6.25($\\pm$0.00)** | 31.87($\\pm$0.00) | 15.64($\\pm$0.00) | 59.21($\\pm$0.00) |\\n| Fscore(\\%) | | | | | | | |\\n| AdaGAE | 67.74($\\pm$0.79) | 50.51($\\pm$0.41) | 23.68($\\pm$0.14) | 19.61($\\pm$1.23) | - | - | - |\\n| DEMVC | 41.80($\\pm$1.04) | 50.60($\\pm$1.59) | 27.66($\\pm$1.52) | 22.69($\\pm$1.14) | 56.39($\\pm$1.68) | **23.68($\\pm$1.57)** | 48.39($\\pm$0.66) |\\n| MFLVC | 73.92($\\pm$1.63) | 52.68($\\pm$1.43) | 32.41($\\pm$1.05) | 25.13($\\pm$0.67) | - | - | - |\\n| DSMVC | 73.79($\\pm$1.89) | 51.00($\\pm$1.28) | 30.14($\\pm$1.12) | 25.01($\\pm$0.22) | **56.85($\\pm$1.54)** | 21.01($\\pm$1.85) | **55.03($\\pm$1.86)** |\\n| Ours | **87.92($\\pm$0.00)** | **78.12($\\pm$0.00)** | **41.12($\\pm$0.00)** | **28.55($\\pm$0.00)** | 44.84($\\pm$0.00) | 20.64($\\pm$0.00) | 51.37($\\pm$0.00) |\\n\\n\\nAs seen, even against deep learning methods, our results are still comparable. \\n\\n\\n\\n\\n\\n\\n**Q3:** The authors should check the data for some methods, such as OrthNTF and GSC, since the performance of these new methods is very poor.\\n\\n**A3:** Thanks. We deeply apologize for causing any confusion to Reviewer 2FNR. We have carefully checked these. The reason for this phenomenon is that the default hyper-parameter settings in the released code are inconsistent with those in the paper. We have corrected these parameters according to the guidance presented in their paper and reorganized the comparative experiments. Please check the result comparison table. 
We very sincerely thank Reviewer 2FNR for pointing this out, which significantly helps us improve the manuscript.\"}", "{\"metareview\": \"The paper introduces DLA-EF-JA, a multi-view clustering method designed to address challenges such as anchor misalignment, instability, and overlooked anchor relationships. By leveraging self-expression and topology learning, the method explores underlying data structures and constructs cluster labels directly using a binary strategy to enhance stability.\\n\\nThe reviewers provided mixed feedback, raising several concerns, including: (1) the work lacks sufficient novelty; (2) the absence of popular deep learning methods as baselines, which raises questions about the reliability of experimental results; (3) limited improvement in performance compared to existing methods. Based on these considerations, the paper is not recommended for acceptance at this time.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers' opinions remained unchanged.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
58KF6ne6d4
Kinematics-Informed Reinforcement Learning for Trajectory Optimization in CNC Machining
[ "Jin Zhang", "Mingyang Zhao", "XIN JIANG", "Dong-ming Yan" ]
Toolpath smoothing and feedrate planning are key techniques in Computer Numerical Control (CNC) machining, and play a significant role in machining accuracy, efficiency, and tool life. Traditional methods typically decouple path smoothing from feedrate planning, without considering the kinematic constraints during the smoothing process. As a result, the subsequent feedrate planning process is subject to more stringent kinematic limitations, which hinders the achievement of optimal speed execution. However, the integration of these two processes presents a significant challenge due to severe complexity and nonlinearity of the problem. Here, we propose a novel Reinforcement Learning (RL) based method, termed KIRL, to address the integrated optimization problem. Experimental results demonstrate that KIRL can generate smoother trajectories and optimize machining time compared to traditional decoupled methods. To our best knowledge, KIRL is the first RL-based method for solving the integrated toolpath smoothing and feedrate planning optimization problem in CNC machining.
[ "Trajectory Optimization", "Reinforcement Learning", "CNC Machining" ]
https://openreview.net/pdf?id=58KF6ne6d4
https://openreview.net/forum?id=58KF6ne6d4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t9Kc7fO7I0", "dBB1f74wcH", "X5jeOsCLM5", "BtmKvgcuNQ", "2eGldhcglh" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730669293788, 1730709585782, 1730541173997, 1731430702858, 1730041357408 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3325/Reviewer_hemx" ], [ "ICLR.cc/2025/Conference/Submission3325/Reviewer_uu6z" ], [ "ICLR.cc/2025/Conference/Submission3325/Reviewer_Tp6q" ], [ "ICLR.cc/2025/Conference/Submission3325/Authors" ], [ "ICLR.cc/2025/Conference/Submission3325/Reviewer_5bTc" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a reinforcement learning (RL) method called KIRL for integrated joint toolpath smoothing and feedrate planning in Computer Numerical Control (CNC) machining. The tool trajectories are divided into segments by a series of boundary points and quintic polynomial functions between them. The RL agent is trained for predicting kinematic states at the boundary points, which are then used for polynomial interpolation. The duration of each segment is separately optimized by maximizing the reward function. Experimental results demonstrate that KIRL can generate smoother trajectories and optimize machining time compared to traditional decoupled methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem this paper aims to solve, i.e., integrated toolpath smoothing and federate planning, is rooted in real-world manufacturing. It is important for improving machining accuracy, efficiency, and tool life. The proposed method of using RL for trajectory optimization is original, and its performance is better than traditional decoupled approaches. The writing of the paper is clear.\", \"weaknesses\": \"1. The paper does not convince me why the kinematic state prediction problem is an MDP. In particular, why does it have the Markov property? 
The authors defined an observation space, but what they need to define is a state space that makes the problem an MDP. From my understanding, some elements in the current observation space already violate the Markov property. For example, the segment length and turning angle in the next step cannot be determined by observation or action at the current step. The authors are suggested to explicitly justify how their formulation satisfies the Markov property, and to clarify the distinction between their observation space and the underlying state space of the MDP.\\n\\n2. The duration of each segment is computed by minimizing the reward function of that segment. This is not optimal because the objective is to minimize the total machining time, which should be a joint minimization on the sum of durations of all segments. From my understanding, the authors do separate optimization because solving future time duration requires future kinematic states, which are not available at the current step. If this is true, it reinforces the doubt whether the problem is an MDP because now the reward function also depends on future states. In addition, the authors are suggested to discuss the trade-offs of their approach versus joint optimization, and to clarify how their method approximates or relates to the global optimum.\\n\\n3. The authors mentioned that there are some recent studies formulating the integration of toolpath smoothing and feedrate planning as a holistic problem, but they did not explain how these methods solve the problem, nor did they compare the proposed method with them in the experiments. It is unclear whether and why the proposed method is superior to existing integrated methods. The authors are suggested to include a brief overview of how existing integrated methods work, and a comparative evaluation against at least one state-of-the-art integrated approach.\", \"questions\": \"1. Why is the kinematic state prediction problem an MDP? 
Why does it have the Markov property? What is the state space of this MDP?\\n2. Why is the duration of each segment optimized separately? Is it possible to jointly optimize the sum of durations of all segments? Is the reward function still well-defined in this case?\\n3. How do existing integrated methods solve toolpath smoothing and feedrate planning? Why is RL superior to these methods? How does the proposed method perform compared to these integrated methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an RL approach to improve smoothness and efficiency in following CNC machining toolpaths. In CNC machining, the toolpath is defined as a G01 code which is a series of points. Paths between intermediate points are straight line segments connecting them. The junctions introduce discontinuities in velocity and acceleration. Traditional approaches first smooth the trajectory, then adjust the tool path velocity to accommodate maximum velocity, acceleration and jerk constraints. This de-coupling can introduce inefficiencies. Hence, the authors propose a coupled optimization approach leveraging RL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Interesting idea of using RL to solve the CNC routing problem, which is traditionally solved by hand-crafted algorithms\", \"weaknesses\": \"1. Interesting study but not enough evidence to show usefulness. Given the topic of the paper, it either demands real world results (e.g. higher quality CNC output, faster toolpath, etc) or very strong evidence in simulated results.\\n2. No real world results. Simulated results are also not very strong. E.g. authors mention in Line 43-45\\\"..decoupled approach often yields suboptimal results..limiting the achievable feedrate\\\". However, in 2/4 toolpaths, the proposed method generates slower toolpaths than existing methods.\\n3. 
Given the additional time and complexity with RL based optimization, one would use it only if there is a strong reason, which is missing in the current paper.\", \"minor\": \"1. 140-141: udden->sudden\\n2. 157-158: I believe there is a missing bracket\", \"questions\": \"The time component of the optimization is removed when we move from Equation 5 to Equation 7, is it intentional and why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a reinforcement-learning-based method to solve the integrated toolpath smoothing\\nand feedrate planning problem in CNC machining. PPO and SAC are used to train RL agents to predict intermediate kinematic states. To generate the trajectory, the target is to perform an integrated optimization to find a trajectory that minimizes a weighted sum of both the trajectory jerk (related to smoothing of the path) and the machining time, taking into account kinematic constraints. Then RL finds intermediate kinematic states at the path segment boundaries. The method is evaluated on four tool paths and shows generally better performance than the benchmark methods used in the evaluation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Applying RL-based methods for improvement of trajectory tracking in CNC machining is novel for this application domain. The approach is original and clearly explained in the paper. I appreciate the provided algorithm and the paper's presentation in general.\", \"weaknesses\": \"1) The authors solve the simultaneous problem of smoothing a path given by linear segments and optimizing the feed rate (velocity) of the CNC machine. In their literature study, they omit existing work on this problem (see references). 
The problem can be seen in two different ways: 1) improving the performance of the system, by improving both the time duration and the position error; and 2) solving mathematically the smoothing problem together with feed rate optimization, which is the preferred one in this paper. For both problems, there is relevant literature which is omitted. I provide a couple of references, addressing both. It would be useful to include that in the literature review.\\n\\nZhang, Y., Wang, T., Peng, P., Dong, J., Cao, L. and Tian, C., 2021. Feedrate blending method for five-axis linear tool path under geometric and kinematic constraints. International Journal of Mechanical Sciences, 195, p.106262.\\n\\nLiu, B., Xu, M., Fang, J. and Shi, Y., 2020. A feedrate optimization method for CNC machining based on chord error revaluation and contour error reduction. The International Journal of Advanced Manufacturing Technology, 111, pp.3437-3452.\\n\\nKim, H. and Okwudire, C.E., 2020. Simultaneous servo error pre-compensation and feedrate optimization with tolerance constraints using linear programming. The International Journal of Advanced Manufacturing Technology, 109, pp.809-821.\\n\\nA. Rupenyan, M. Khosravi and J. Lygeros, \\\"Performance-based Trajectory Optimization for Path Following Control Using Bayesian Optimization,\\\" 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 2116-2121, doi: 10.1109/CDC45484.2021.9683482.\\n\\n\\n2) Even when the preferred way of addressing the performance problem is smoothing+feed rate optimization, some quantification of the tracking error is missing. It would be useful to see what is the effect of the approach on the positioning performance.\\n\\n3) While there is a comparison with some approaches treating the feed rate and the path generation separately, there is no comparison with approaches treating the problems jointly (see references above). 
If such a comparison is added, I would be willing to increase my rating of the paper.\\n\\n4) There is no quantification or discussion of the computational performance of the method. Is it intended to be used offline?\", \"questions\": \"1. In the simulations, sometimes the KIRL-PPO shows better results, and sometimes the KIRL-SAC. Are there any criteria to decide when to use PPO or SAC?\\n\\n2. Does the method guarantee that there is no constraint violation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes to apply reinforcement learning to jointly solve the feedrate planning and path smoothing problem for CNC machines.\\nThe paper proposes a formulation for the problem, including reward function, state space, and action space/policy parametrization. The proposed approach is compared in simulation on four 2D trajectories against non-RL baselines, and shown to outperform those.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is generally well written and does a good job at explaining the problem. The paper tackles an important, practical problem in CNC. The novel method is presented clearly. The method seems correct and sound. The experiments show good results compared to the baselines.\", \"weaknesses\": \"While the paper tackles an important practical problem, I did not see any methodological novelty. It is an application of very standard RL algorithms to a novel problem. The specific formulation of the optimization objective/reward function, state space, and action space/policy representation seem novel, but I could not extract any more general lessons learned from them. 
There is nothing per se wrong with the design choices made and the resulting overall system seems to work. However, there are quite a few choices where alternatives would be possible and insights into why certain choices were made are lacking - I would have expected at least some ablations in the experiments. As the paper itself points out, in its current form the results are very far from practical applicability and rather a proof of concept.\\nI also have a few doubts about details of the method and paper, and about the evaluation. See questions below.\\n\\nTo sum up, overall there is neither an algorithmic contribution nor sufficient general insights for a systems paper. The experimental results also do not comply with basic standards for RL papers.\", \"questions\": [\"A motivation is missing for why this is treated as a sequential decision making problem rather than a global optimization problem. I'd assume better global performance can be achieved when optimizing over the whole trajectory rather than considering the past as fixed and only considering the next straight segment.\", \"So more generally: Why is the optimization done per segment? In the initial parts with the optimization problem that can lead to arbitrarily bad performance on subsequent segments, in the RL formulation this is accounted for as it optimizes for the cumulative reward.\", \"Throughout the paper costs/constraints/optimization objectives are mixed in terms of \\\"global\\\" (whole trajectory) and \\\"local\\\" per segment, e.g. the ones in Eq (3) vs (4). That makes it rather hard to follow.\", \"The constraints and objectives change throughout the paper - which makes it rather hard to follow. In Sect. 2.2 we have constraints on velocity, acceleration and jerk, in Sect. 3.1 that becomes a minimization of the jerk (while complying to the constraint on chord error, which intuitively would mean the jerk gets minimized at the cost of reaching the chord error limit), and then in Sect. 
3.3 for the reward function we get a weighted combination of minimizing the chord error and violations of the jerk limit.\", \"How are constraints ensured when learning? In the reward function constraint violations just seem to be modeled as costs, so there is no guarantee that they don't get violated, i.e., soft constraints rather than hard constraints.\", \"Sect. 3.1 and 3.2: While quintic polynomial functions can indeed solve the problem for the constraints as defined by the authors, it remains very vague why that is the best idea. As far as I understood with this formulation, perfectly straight lines (which would be desirable for many mechanical parts) are not possible, and the optimized path can be quite far off. Wouldn't a higher order representation (or a different kind of spline, joining straight lines with transition 'pieces' at the corners, etc.) allow to follow the desired path more accurately? More generally: what are the implications/consequences of the design choices?\", \"Sect. 3.2 / Fig. 3: The result that the slower the CNC moves the more distorted paths we get is highly counterintuitive. If the objective is to avoid the limits/constraints then simply moving slower should reduce all three velocity, acceleration, and jerk. This seems to be more an artifact of keeping the boundary conditions fixed. And I also don't believe the trade-off (longer time = more traj distortion but greater velocity 'smoothness') is general - already in your plot we see that when going from T=4 to T=5 the trajectory gets more distorted, but also the velocity peak on the right (t = 3.2 and 3.7) becomes worse (i.e., the trajectory needs to accelerate drastically towards the end to achieve the boundary constraints), which seems to invalidate the claim in the paper. 
My feeling is that the effects will depend quite a bit on the combination of boundary conditions and the range of T you consider, and that drawing the conclusion about the trade-off based on a single example/figure isn't warranted. Simple example: Boundary conditions and T are chosen in a way that the path is a simple straight line with constant velocity (and zero acceleration and zero jerk) so zero chord error, then both increasing and decreasing T will require some non-zero acceleration and potentially require path deviations.\", \"Eq (11): What is the purpose of having N-i as part of the state, but no indication of which segment we are currently in?\", \"Eq (12): p^L and p^R are shown in Fig 2 but don't seem to be explained in the text. Nor does it become clear how far away they can be from p. \\\\delta_max? But then how do you ensure that the curved segment in between doesn't protrude outside that limit?\", \"l. 295: I assume the absolute value of the jerk j_i(t) should be used in r_i^jerk\", \"Algo 1: only shows policy execution, I think it would be interesting to also show the training procedure\", \"Sect. 4.1: \\\"due to unavailability of baseline implementations\\\" for PPO and SAC there are quite a few implementations available (e.g. Stable Baselines), or do you mean something else?\", \"Sect. 4.2: Why is the chord error not evaluated/reported? I think this would be crucial to see at which accuracy cost the improved other metrics come. E.g. in the Fig 5 inset, the path error seems to have increased quite a lot - if we want to have accurate points.\", \"I'm not from the CNC field, but are these decorative shapes really representative for many tasks? I'd assume for more technical applications accurate straight lines and sharp corners are actually crucial.\", \"Table 1: It is unclear what we see here. Single RL run? For RL papers it is crucial to report statistics over several runs (mean + std).\", \"Sect. 
4.3: I really would have liked to see some ablations on your method (rather than only 2 different RL algorithms) and sensitivity analysis on the various parameters there are in the approach (e.g. reward weights).\", \"Sect. 4.4: The state and action space was designed with generalization in mind, now figuring out that it needs to be retrained for all paths after all is disappointing. Related to this, it would have been nice to see some results on the performance without retraining.\", \"A few typos, e.g. l. 140 \\\"udden\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
58AhfT4Zz1
Causal-aware Graph Neural Architecture Search under Distribution Shifts
[ "Peiwen Li", "Xin Wang", "Zeyang Zhang", "Yijian Qin", "Ziwei Zhang", "Jialong Wang", "Yang Li", "Wenwu Zhu" ]
Graph neural architecture search (Graph NAS) has emerged as a promising approach for autonomously designing graph neural network architectures by leveraging the correlations between graphs and architectures. However, the existing methods fail to generalize under distribution shifts that are ubiquitous in real-world graph scenarios, mainly because the graph-architecture correlations they exploit might be spurious and varying across distributions. In this paper, we propose to handle the distribution shifts in the graph architecture search process by discovering and exploiting the causal relationship between graphs and architectures to search for the optimal architectures that can generalize under distribution shifts. The problem remains unexplored with the following critical challenges: 1) how to discover the causal graph-architecture relationship that has stable predictive abilities across distributions, 2) how to handle distribution shifts with the discovered causal graph-architecture relationship to search the generalized graph architectures. To address these challenges, we propose a novel approach, Causal-aware Graph Neural Architecture Search (CARNAS), which is able to capture the causal graph-architecture relationship during the architecture search process and discover the generalized graph architecture under distribution shifts. Specifically, we propose Disentangled Causal Subgraph Identification to capture the causal subgraphs that have stable prediction abilities across distributions. Then, we propose Graph Embedding Intervention to intervene on causal subgraphs within the latent space, ensuring that these subgraphs encapsulate essential features for prediction while excluding non-causal elements. Additionally, we propose Invariant Architecture Customization to reinforce the causal invariant nature of the causal subgraphs, which are utilized to tailor generalized graph architectures. 
Extensive experiments on synthetic and real-world datasets demonstrate that our proposed CARNAS achieves advanced out-of-distribution generalization ability by discovering the causal relationship between graphs and architectures during the search process.
[ "Graph Neural Architecture Search", "Out-of-Distribution Generalization", "Causal Learning" ]
Reject
https://openreview.net/pdf?id=58AhfT4Zz1
https://openreview.net/forum?id=58AhfT4Zz1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wsLzEu0AOt", "ryg6et74ho", "jP9OFjcQUa", "igAnz2jl1p", "g74dT1UVCi", "bibIZ4uDxI", "bLSe96sa2w", "aMh7mwVUWE", "YHx3t5fbdp", "RRWIBHjjnh", "MDC9BtHy4H", "M2r1D7u8ng", "ERNWXE4oHh", "BmrlKevr9k", "BkXJij7iSy", "7c3F4SXuAG", "6IuSAT6Vnw", "5ecDC42JDd" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1732622802573, 1732620536310, 1732617604103, 1732615902324, 1732619036974, 1730705660857, 1732622655558, 1732621728599, 1732618179068, 1729759499677, 1733614486502, 1733311027278, 1732621715514, 1729754289753, 1733313407946, 1737524007881, 1732620748001, 1730601614157 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Reviewer_n9L7" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Reviewer_EUwC" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Reviewer_EUwC" ], [ "ICLR.cc/2025/Conference/Submission9825/Area_Chair_ajR6" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Reviewer_Q7m9" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9825/Authors" ], [ "ICLR.cc/2025/Conference/Submission9825/Reviewer_Zb28" ] ], "structured_content_str": [ "{\"comment\": \"> Q1. 
Could you conduct an ablation study focusing on the neural network search details provided in the added Appendix C.1? Specifically, I'm interested in understanding:\\n> \\n> - The effectiveness of different backbones to OOD distribution shifts, possibly illustrated through weight distributions.\\n> - How about time and memory requirements during the search process?\\n> - Are all of the backbones crucial for effective OOD distribution handling?\\n\\nA4. Thank you for your question. As shown in `Tables 1, 2, and 7`, we compare the performance of CARNAS with each of the six backbones\\u2014GCN, GAT, GIN, SAGE, GraphConv, and MLP\\u2014that we adopted as candidate operations, as described in `Appendix D.1`. Our method consistently outperforms these backbones individually, which serves as an ablation study demonstrating the effectiveness of integrating backbones through the neural architecture search (NAS) process. Below, we address your specific questions in detail:\\n\\n- A4-1. Regarding the effectiveness of different backbones/operations under distribution shifts: The case study in `Appendix G` addresses this issue. For graphs with different motif structures (causal subcomponents), we provide the learned operation probabilities for each layer (in expectation) in `Figure 7`. These probabilities, *i.e., weight distributions*, reveal that different graph types (characterized by diverse motif shapes) prefer distinct architectures. Notably, each of the six backbones is selected as the most suitable operation for at least one graph\\u2019s one layer, emphasizing their necessity. Furthermore, our approach is flexible and can accommodate additional backbones for specific tasks or datasets. By integrating diverse operations and tailoring architectures to match graphs with varying causal subparts and features, our NAS-based CARNAS significantly enhances OOD generalization.\\n- A4-2. 
Regarding time and memory requirements during the search process: The architecture search module is jointly optimized alongside other modules, so we report the statistics for the whole process. As reported in `Table 8 & 9` and above rebuttal answer `A1.`, the **overall time and memory costs for CARNAS are competitive with those of non-NAS-based (fixed-network) graph OOD methods**. This efficiency is achieved by simultaneously searching for the architecture and learning its parameters. The practical time and memory efficiency of CARNAS makes it a viable and effective choice. Thus, we experimentally verify that the proposed CARNAS addresses distribution shifts during the NAS process from a causal perspective.\\n- A4-3. Concerning the necessity of all six backbones for effective OOD handling: As illustrated in `Figure 7`, none of the six backbones exhibits a zero probability of being selected. This indicates that each backbone has the potential to be preferred in certain contexts, making them indispensable as candidate operations in OOD scenarios. Their diverse contributions are crucial for achieving the strong generalization capabilities of CARNAS on OOD datasets.\\n\\nWe extend our sincere thanks again for your invaluable feedback and thoughtful consideration.\"}", "{\"comment\": \"We would like to express our sincere appreciation to the reviewer for providing us with detailed comments and suggestions. We have carefully reviewed each point and respond to the reviewer\\u2019s comments point by point as follows.\\n\\n> W1. *\\u201cI believe the paper does not clearly explain why NAS can help adjust GNNs to identify causal information, which I consider the main issue of the paper. In my view, NAS optimizes the structure of GNNs, enhancing their efficiency or expressiveness, but it does not inherently enable GNNs to determine what type of data to model. 
At the very least, the authors did not provide a clear explanation of this point in the paper.\\u201d*\\n> \\n> \\n> Q1. Could you explain how NAS can guide GNNs to model specific data causal relationships?\\n> \\n\\nA1. Thank you for your question. We believe there is a misunderstanding regarding the target problem of this work. We would like to clarify that this work utilizes causality to enhance NAS rather than employing NAS to identify causal information. We apologize for the confusion and will further clarify this in the revised manuscript.\\n\\nTo be specific, we frame the entire prediction pipeline in graph tasks as graph $\\rightarrow$ architecture $\\rightarrow$ label. Our work aims to **unveil and utilize the *causal graph-architecture relationship* to address distribution shifts during the graph neural architecture search (NAS) process**. This is an **under-explored yet important issue**, as previous works [1-3] emphasize the complex relationship between graph data and the optimal architecture. However, prior methods fix the network structure and solely focus on the *graph-label relationship*, neglecting the influence of the architecture itself, which can lead to suboptimal results under distribution shifts.\\n\\nAs you mentioned, NAS optimizes the structure of GNNs (based on graph data), and this process provides an opportunity to delve into the graph-architecture relationship. Therefore, we incorporate the architecture search process to improve generalization by identifying and leveraging the causal graph-architecture relationship to construct the optimal architecture (denoted as $A_c$ in our paper) for graph data.\\n\\nSpecifically, in our method:\\n\\n1. The causal subgraph $G_c$, extracted from the *Disentangled Causal Subgraph Identification module*, serves as a carrier of information causally relevant to the optimal architecture, rather than being merely causally related to the label $\\hat{Y}$ as in prior approaches.\\n2. 
In the *Invariant Architecture Customization module*, we customize the optimal architecture $A_c$ using $G_c$ and simulated intervention architectures ${A_v}_j$s using intervention graphs. The loss term $\\\\mathcal{L}\\\\_{arch}$ for ${A_v}_j$s is used to **regulate the influence of spurious components on the construction of the optimal architecture**. This allows us to identify and leverage causal graph-architecture relationships to build the optimal architecture.\\n\\nIn conclusion, we first propose to\\u00a0**jointly optimize causal graph-architecture relationship and architecture search**\\u00a0by offering an\\u00a0**end-to-end**\\u00a0training strategy for extracting and utilizing causal relationships between graph data and architecture, which is stable under distribution shifts, during the architecture search process, thereby enhancing the model\\u2019s capability of OOD generalization.\\n\\n> W2. The paper lacks theoretical justification for the regulatory capability of NAS.\\n> \\n\\nA2. Thank you for your comment. While our work primarily focuses on the empirical effectiveness of Neural Architecture Search (NAS), it is worth noting that NAS has been widely studied[4-8] for its ability to regulate and optimize neural network performance across diverse tasks. \\n\\nMoreover, we have included additional **theoretical analysis about the problem to further illustrate our method in a more rigorous way**. The added theoretical analysis is highlighted in blue in `Appendix C` in the **updated [`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**.\\n\\n---\\n\\n[1] You et al. Design space for graph neural networks. NeurIPS2020\\n\\n[2] Corso et al. Principal neighbourhood aggregation for graph nets. NeurIPS2020\\n\\n[3] Xu et al. How neural networks extrapolate: From feedforward to graph neural networks. arXiv2020\\n\\n[4] DARTS: Differentiable Architecture Search. *ICLR 19.*\\n\\n[5] Auto-GNN: Neural Architecture Search of Graph Neural Networks. 
*2019*.\\n\\n[6] Graph Neural Architecture Search. *IJCAI 20*.\\n\\n[7] Automated Machine Learning on Graphs: A Survey. *IJCAI 21*.\\n\\n[8] One-shot Graph Neural Architecture Search with Dynamic Search Space. *AAAI 21*.\"}", "{\"comment\": \"> Q1. In Line 113,\\u00a0$N_{tr}$\\u00a0and\\u00a0$N_{te}$\\u00a0are not explicitly defined, though their meanings seem clear from the context. Please clarify these terms in the final manuscript.\\n> \\n\\nA3. Thank you for pointing this out. In the updated version, we have clarified that $N_{tr}$ and $N_{te}$ represent the number of graph instances in the training set and testing set, respectively. \\n\\n> Q2. How are the subgraphs in Eq. 6 represented (e.g., soft edge mask or hard crop)? Where and how are they used in subsequent steps?\\n> \\n\\nA4. Thank you for your question. In Eq. 6, the subgraphs are extracted using a hard crop on edges to identify the causal subgraph $G_c$, while simultaneously separating out spurious subgraphs ${G_s}_j,~j \\\\in [1, N_s]$.\\n\\nSpecifically, in our method: The causal subgraph $G_c$, extracted from the *Disentangled Causal Subgraph Identification module*, serves as the carrier of information causally relevant to the optimal architecture. Using the *Graph Embedding Intervention module*, spurious subgraphs ${G_s}_j$s are employed to generate intervention graphs ${G_v}_j$s (with graph representations $\\\\mathbf{H_v}_j$ ) by intervening on $G_c$. In the *Invariant Architecture Customization module*, the optimal architecture $A_c$ is customized with $G_c$ and simulated intervention architectures ${A_v}_j$s are customized with intervention graphs. The loss function $\\\\mathcal{L}\\\\_{arch}$ for ${A_v}_j$s can **regulate the influence of spurious parts on the construction of the optimal architecture**. 
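As a minimal numeric illustration of the two operations described above (a sketch with hypothetical names and shapes, not the paper's implementation), the hard crop of Eq. 6 keeps the top-scored fraction of edges as the causal subgraph, and the latent intervention of Eq. 10 mixes causal and spurious embeddings:

```python
import numpy as np

def hard_crop(edge_scores, t):
    """Keep the top fraction t of edges as the causal subgraph G_c;
    the remaining edges form a spurious subgraph G_s (hard mask, Eq. 6)."""
    k = max(1, int(t * len(edge_scores)))
    order = np.argsort(edge_scores)[::-1]          # edges sorted by importance, descending
    causal_mask = np.zeros(len(edge_scores), dtype=bool)
    causal_mask[order[:k]] = True
    return causal_mask, ~causal_mask

def intervene(H_c, H_s, mu):
    """Latent-space intervention of Eq. 10: H_v = (1 - mu) * H_c + mu * H_s."""
    return (1.0 - mu) * H_c + mu * H_s

edge_scores = np.array([0.9, 0.1, 0.7, 0.3])       # toy edge-importance scores
causal, spurious = hard_crop(edge_scores, t=0.5)   # keeps the 2 highest-scored edges
H_v = intervene(np.ones(4), np.zeros(4), mu=0.3)   # mild intervention on a toy embedding
```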
Through this process, both causal and spurious subgraphs are integral to identifying and leveraging causal graph-architecture relationships for constructing the optimal architecture.

> Q3. To enhance the clarity of your approach, it would be helpful to visualize the causal and non-causal subgraphs for each dataset used in the case studies.

A5. Thank you for your suggestion! Following your advice, we have visualized the causal subgraphs for each dataset used in the case study in `Appendix G, Figure 8`. These visualizations illustrate the learned graph-architecture relationships effectively. Please refer to the **updated [`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)** for more details.

[1] Representation learning: A review and new perspectives.

[2] Disentangled representation learning.

---

We would like to express our sincere appreciation to the reviewer for providing detailed comments and suggestions. We have carefully reviewed each comment and offer the following responses:

> W1. Clarity in Sec 3.3: Given I'm having limited familiarity with Graph NAS, the dynamic graph neural network architecture production and optimization process described in Sec 3.3 remains somewhat unclear for me. A visual representation and a more detailed explanation would significantly improve the paper's readability.

A1. Thank you for your valuable feedback. We aim to intuitively illustrate the architecture production and optimization process below, referring to `Figure 1`, which visualizes Section 3.3, along with the **algorithm** in `Appendix B`.

- Production: Given a graph embedding, such as $\mathbf{H_c}$, our goal is to construct an architecture $A$ (with $K$ layers, each containing $|\mathcal{O}|$ candidate operators). For example, in `Figure 1`, the architecture consists of 2 layers, each with 3 candidate operators. We represent the architecture $A$ as a matrix $\mathbf{A} \in \mathbb{R}^{K \times |\mathcal{O}|}$, where $\mathbf{A}_{k,u} = \alpha_u^k$ is the mixture coefficient of operator $o_u(\cdot)$ in layer $k$, denoting the importance of different operators in different layers. In the figure, this coefficient is visualized as arrows of varying shades in the architecture. The constructed architecture $A$ is the result of combining these coefficients with the specific operators, as shown in Eq. 11. Additionally, we use a prototype vector $\mathbf{op}_u^k$ to represent an operator in a specific layer, allowing us to learn the coefficient $\alpha_u^k$ based on both $\mathbf{op}_u^k$ and $\mathbf{H_c}$.
- Optimization: Using the production process described above, we customize the optimal architecture $A_c$ for $G_c$ and simulate intervention architectures ${A_v}_j$ for the intervention graphs. We use $A_c$ as the optimal architecture for the graph instance to output the final label prediction, and apply $\mathcal{L}_{arch}$ to the ${A_v}_j$ to **regulate the influence of spurious components when constructing the optimal architecture**. This approach identifies and leverages the causal relationships between graphs and architectures to construct the optimal architecture.

> W2. Causal-Aware Solution's Justification: While the paper presents a causal-aware solution for handling distribution shifts, some aspects require stronger theoretical support to underscore the novelty and significance of the approach.

> W2-1. More Theoretical Support.

A2-1. Thank you for your feedback. Although the rebuttal time is limited, we have attempted to provide a detailed theoretical analysis of the problem to further illustrate our method in a more rigorous manner. The added theoretical analysis is highlighted in blue in `Appendix C` of the **updated [`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**.

> W2-2. Reliability of Causal Subgraph in Latent Space.

A2-2. Thank you for your comment. Representing causal subgraphs and performing interventions in the latent space allow us to construct architectures based on the intervened latent features. Following your advice, the added theoretical analysis also reflects how our method obtains the causal parts. Additionally, we visualize the edge importance for selecting causal subgraphs in each graph in `Figure 8` and the searched architectures for them in `Figure 7` (`Appendix G`), which better demonstrate the learned causal graph-architecture relationships.

> W2-3. Difference with PGExplainer.

A2-3. Thank you for your comment. While PGExplainer [26] aims to identify explanatory subgraphs based on edge features (a common step in various graph tasks), we would like to clarify the key differences between our method and PGExplainer.

In the **Disentangled Causal Subgraph Identification** module, we introduce the concept of disentangled learning [1,2] to represent distinct latent factors as unique vector representations. This approach allows us to comprehensively disentangle causal latent graph structural features from spurious ones, enabling a more precise identification of the causal subgraphs that influence architecture construction. Together with the subsequent modules, our method effectively **disentangles complex graph-architecture relationships**, learning distinct latent factors that guide the customization of optimal architectures from intricate graph structural features.

To summarize, CARNAS **differs in both objective and methodology**: 1. Objective and Motivation: PGExplainer focuses on extracting explanatory subgraphs, while our method aims to disentangle causal features from spurious ones and extract causal subgraphs for architecture optimization. 2. Mechanism: Instead of merely selecting important edges based on graph features, we leverage a disentangled layer to separate causal features from spurious ones, offering a fundamentally different mechanism for subgraph identification.

---

> Q1. In the graph embedding intervention module, you use $\mu$ to control intervention intensity in Eq. (10): ${\mathbf{H}_v}_j = (1-\mu)\cdot\mathbf{H_c} + \mu\cdot{\mathbf{H}_s}_j$. Have you considered using adaptive intervention strategies where $\mu$ varies based on the structural properties of $G_c$ and $G_s$? This could potentially better handle graphs with varying degrees of spurious correlations.

A4. Thank you for bringing up this insightful point! We acknowledge that your idea makes great sense, especially for modeling the intervention graph embedding. Discovering causal graph-architecture relationships through graph properties and applying varying levels of intervention based on these properties aligns well with our approach. Since our main focus is still on the overall process of neural architecture search, your suggested method could potentially improve performance further. We would like to explore and incorporate this idea in future work.

> Q2. The overall objective function (Eq. (17)) uses a linearly growing $\sigma_p$ corresponding to the epoch number. Could you elaborate on why linear growth was chosen over other schedules (e.g., exponential, step-wise)? How does the schedule of $\sigma_p$ affect the trade-off between causal structure learning and architecture optimization?

A5. Thank you for your detailed question. We chose linear growth for simplicity; the other schedules you mention could also be adopted, and the choice of scheduling method may not significantly impact overall performance. The purpose of adopting this $\sigma_p$ schedule is to enhance training efficiency by dynamically adjusting the focus of training in each epoch: emphasizing the causal-aware part (i.e., identifying suitable causal subgraphs and learning vectors of operators) in the early stages, and prioritizing the performance of the customized super-network in the later stages.

We further report both the training loss and validation loss of the two components ($\mathcal{L}_{causal}$ and $\mathcal{L}_{pred}$) with and without the dynamic $\sigma_p$ schedule in **`Figure 5`**. This analysis verifies and illustrates the impact of $\sigma_p$ on the trade-off between causal structure learning and architecture optimization. The *newly added detailed analysis is highlighted in blue* in `Appendix E.3` of the **updated [`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**.

[1] Discovering invariant rationales for graph neural networks. ICLR 2022.

[2] Learning Invariant Graph Representations for Out-of-Distribution Generalization. NeurIPS 2022.

---

**Summary:** The paper proposes a novel method, Causal-aware Graph Neural Architecture Search (CARNAS), to enhance the generalizability of Graph Neural Network (GNN) architectures under distribution shifts. By discovering stable causal relationships between graph structures and GNN architectures, CARNAS aims to mitigate issues with spurious correlations that often degrade performance across varying distributions. CARNAS introduces three core modules: Disentangled Causal Subgraph Identification, Graph Embedding Intervention, and Invariant Architecture Customization. Experimental results on both synthetic and real-world datasets show significant performance gains, especially in out-of-distribution generalization.

**Soundness:** 2. **Presentation:** 3. **Contribution:** 2.

**Strengths:**
1. Well-structured modular approach: The CARNAS framework is thoughtfully organized, with each component clearly contributing to improved generalization under distribution shifts.
2. Robust experimentation: The paper includes extensive experiments across synthetic and real-world datasets, highlighting the robustness of the proposed method.
3. Component-level contribution clarity: Each module's individual contribution is demonstrated, providing transparency and supporting the effectiveness of the approach.

**Weaknesses:**
1. Clarity in Section 3.3: Given I'm having limited familiarity with Graph NAS, the dynamic graph neural network architecture production and optimization process described in Section 3.3 remains somewhat unclear for me. A visual representation and a more detailed explanation would significantly improve the paper's readability.
2. Causal-Aware Solution's Justification: While the paper presents a causal-aware solution for handling distribution shifts, some aspects require stronger theoretical support to underscore the novelty and significance of the approach:
   2.1. Limited Theoretical Support: The causal-aware Graph NAS solution leans heavily on implementation specifics, which limits the theoretical grounding of the method and may impact the perceived novelty.
   2.2. Reliability of Causal Subgraph in Latent Space: The representation of causal subgraphs in latent space is an interesting approach; however, it is not entirely clear if the model reliably learns the true causal components or overfits to the training set to optimize the objective.
   2.3. Overlap with Prior Work: Section 3.1 closely mirrors aspects of PGExplainer [26], which limits the novelty of this part of the approach.

**Questions:**
1. In Line 113, $N_{tr}$ and $N_{te}$ are not explicitly defined, though their meanings seem clear from the context. Please clarify these terms in the final manuscript.
2. How are the subgraphs in Eq. 6 represented (e.g., soft edge mask or hard crop)? Where and how are they used in subsequent steps?
3. To enhance the clarity of your approach, it would be helpful to visualize the causal and non-causal subgraphs for each dataset used in the case studies.

**Flag for ethics review:** No ethics review needed. **Rating:** 6. **Confidence:** 3. **Code of conduct:** Yes.

---

> W2-1. *"Limited Experimental Validation: While DIR is included as a baseline, other important causal subgraph-based methods from recent works (such as [1],[2]) are not considered."*

A2. Thank you for your comment. In our main paper, we compared our model with two non-NAS-based graph OOD methods, ASAP and DIR. However, we have also expanded our evaluation to include **13 well-known non-NAS-based graph OOD methods**, *encompassing all the methods you mentioned ([1] as CIGA and [2] as GIL)*.

The results, provided in `Tables 7 & 9`, demonstrate that **CARNAS not only performs exceptionally well among NAS-based methods but also significantly outperforms non-NAS-based graph OOD methods.** This improvement highlights the effectiveness of CARNAS in discovering and leveraging stable causal graph-architecture relationships during the neural architecture search process.

> W3. Novelty discussion beyond the architecture search component.

A3. Thank you for your comment.

We would like to emphasize that the main novelty and contribution of our work lies in **unveiling and utilizing the *causal graph-architecture relationship* to address distribution shifts during the graph neural architecture search (NAS) process.** This is an **under-explored yet crucial issue**, as previous works [1-3] have highlighted the complex relationship between graph data and optimal architectures. They suggest that *different subparts of a graph instance with varying features may be suited to different architectures.*

On this basis, we are the first to propose **jointly optimizing the causal graph-architecture relationship and the architecture search**, offering an **end-to-end** training strategy for extracting and utilizing causal relationships between graph data and architectures, which remain stable under distribution shifts, during the architecture search process, thereby enhancing the model's capability for OOD generalization.

We further clarify the differences from existing methods:

- In the Disentangled Causal Subgraph Identification module, we introduce the idea of *disentangled learning* [4,5], which delineates distinct latent factors as unique vector representations, to comprehensively unveil and disentangle causal latent graph structural features from the spurious features that influence architecture construction. Together with the following two modules, it ***disentangles complex graph-architecture relationships*** by learning distinct latent factors, which influence the customization of the optimal architecture, from complicated graph structural features. It thereby enables a more precise extraction of the causal subgraph, which is leveraged to derive the optimal architecture.
- In the Graph Embedding Intervention module, we perform interventions on the causal subgraph $G_c$ with non-causal subgraphs in the latent space, as in Eq. 10, to obtain intervention graphs, which are essential for the subsequent Invariant Architecture Customization module. We customize the optimal architecture $A_c$ with $G_c$ and other simulated intervention architectures with the intervention graphs, using $\mathcal{L}_{arch}$ on the simulated intervention architectures to regulate the influence of spurious parts in constructing the optimal architecture. This approach helps form a stable architecture based solely on the causal subgraph. The ***intervention not just on the graph but further on the architecture makes our method significantly different from previous work.***

To summarize, our approach **differs in both objective and methodology.**

---

[1] Design space for graph neural networks. NeurIPS 2020.

[2] Principal neighbourhood aggregation for graph nets. NeurIPS 2020.

[3] How neural networks extrapolate: From feedforward to graph neural networks. arXiv 2020.

[4] Representation learning: A review and new perspectives.

[5] Disentangled representation learning.

---

**Thanks**

Thanks for the answers, yet it still confuses me how enhancing NAS helps in handling distribution shifts.

---

We would like to express our sincere gratitude to the reviewer for providing detailed comments and insightful questions. We have carefully considered the reviewer's feedback and would like to address each point as follows:

> W1. While the paper shows good performance on the tested datasets, it lacks a detailed analysis of computational complexity and memory requirements. Specifically, the time complexity could become prohibitive for very large graphs. The authors should discuss how their method performs on graphs with millions of nodes and edges, which are common in real-world applications like social networks.

A1. Thank you for your comment. We provide a detailed analysis of computational complexity and memory requirements in `Appendix E.2, E.4, and F`. Specifically, as shown in `Table 6`, CARNAS consistently requires less time while achieving superior performance compared to the best-performing NAS baseline, demonstrating its enhanced efficiency and effectiveness. Additionally, `Tables 8 & 9` show that **CARNAS's overall time and memory costs are competitive with non-NAS-based (fixed-network) graph OOD methods**, achieved by **simultaneously searching for architectures and learning parameters**.
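To make "simultaneously searching for architectures and learning parameters" concrete, a DARTS-style mixture layer, as in Eq. 11, can be sketched as follows (a hypothetical toy sketch, not CARNAS's code): because each layer's output is a weighted sum over candidate operators, the architecture coefficients and the operator parameters receive gradients from the same forward pass.

```python
import numpy as np

def mixture_layer(x, ops, alphas):
    """Layer k of the mixed architecture (Eq. 11):
    output = sum_u alpha_u^k * o_u(x), with the alphas softmax-normalised."""
    return sum(a * op(x) for a, op in zip(alphas, ops))

# Toy candidate operators standing in for GNN aggregators.
ops = [lambda x: x, lambda x: 2 * x, lambda x: 0 * x]
alphas = np.array([0.2, 0.5, 0.3])      # mixture coefficients alpha_u^k for one layer
x = np.array([1.0, -1.0])
y = mixture_layer(x, ops, alphas)       # 0.2*x + 0.5*(2x) + 0.3*0 = 1.2*x
```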
The practical time and memory efficiency of CARNAS makes it a viable and effective choice. We would also like to clarify that our work, like many previous Graph OOD studies [1,2], focuses on the task of graph classification, where the size of each graph instance does not typically contain millions of nodes and edges. Graphs with millions of nodes and edges, as you mentioned, are usually used for node classification (or link prediction). The task of node classification can be viewed as graph classification based on the ego-graphs of a node, where the K-hop neighbor graph of each node serves as a graph instance that are significantly smaller than the original whole graph, allowing our model to generalize to node classification with comparable complexity to traditional methods.\\n\\n> W2. The method requires careful tuning of four critical hyperparameters, which may significantly impact performance. In particular, the edge importance threshold $t$ in Eq.(6) and the intervention intensity\\u00a0$\\\\mu$ in Eq.(10) show high sensitivity in experiments. While the authors provide some sensitivity analysis on BACE dataset, they don't fully explain how to effectively tune these parameters for new datasets or application domains.\\n> \\n\\nA2. Thank you for your comment. The edge importance threshold $t$ is used to predefine the ratio of extracting causal subgraphs from original graph instance and\\u00a0$\\\\mu$ is used to predefine the intervention intensity. The values of these two hyper-parameters can be roughly determined based on domain knowledge relevant to the dataset. In practical, we tune the parameters by giving a recommended range for each of them, and conduct hyper-parameter optimization on validation set. 
The range for SPMotif is $t\\\\in [0.5,0.85], \\\\mu\\\\in[0.2,0.5], \\\\theta_1\\\\in[0.3,0.5], \\\\theta_2\\\\in[0.005,0.015]$ and for OGBG-Mol* is $t\\\\in [0.4,0.7], \\\\mu\\\\in[0.4,0.7], \\\\theta_1\\\\in[0.1,1.0], \\\\theta_2\\\\in[0.001,0.020]$.\\n\\n> W3. The paper lacks formal theoretical guarantees for the causal discovery process. While the empirical results are strong, the authors should clarify under what conditions their method is guaranteed to identify true causal relationships and provide bounds on the probability of discovering spurious correlations. Additionally, the relationship between the intervention loss and causal invariance could be more rigorously established.\\n> \\n\\nA3. Thank you for your invaluable advice! Although the rebuttal time is limited, we have tried to provide a detailed theoretical analysis to further illustrate our method in a more rigorous way. **The added theoretical analysis is highlighted in blue in `Appendix C` in the updated [`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**. It step by step introduces how we propose to **jointly optimize the causal graph-architecture relationship and architecture search** by offering an **end-to-end** training strategy. This strategy enables the extraction and utilization of causal relationships between graph data and architecture, which remain stable under distribution shifts during the architecture search process, thereby enhancing the model\\u2019s capability for OOD generalization.\"}", "{\"summary\": \"The study proposes a novel method called Causal-aware Graph Neural Architecture Search (CARNAS) to address the challenges posed by distribution shifts in the process of Graph Neural Architecture Search (Graph NAS). Existing methods face limitations when handling distribution shifts in real-world scenarios, as the correlations they exploit between graphs and architectures are often spurious and subject to variation across different distributions. 
CARNAS aims to discover and leverage the causal relationship between graphs and architectures to search for optimal architectures capable of maintaining generalization under distribution shifts.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper innovatively proposes using NAS to address the problem of causal information identification in graph data.\\n2. The paper conducts extensive experiments to validate the proposed method.\", \"weaknesses\": \"1. I believe the paper does not clearly explain why NAS can help adjust GNNs to identify causal information, which I consider the main issue of the paper. In my view, NAS optimizes the structure of GNNs, enhancing their efficiency or expressiveness, but it does not inherently enable GNNs to determine what type of data to model. At the very least, the authors did not provide a clear explanation of this point in the paper.\\n2. The paper lacks theoretical justification for the regulatory capability of NAS. \\n3. The survey and introduction of related work are insufficient.\", \"questions\": \"Could you explain how NAS can guide GNNs to model specific data causal relationships?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a causal-aware graph NAS framework to address the challenge of distribution shifts. By leveraging causal relationships between graph structures and architectures, the method aims to mitigate reliance on spurious correlations and enhance out-of-distribution generalization. Extensive experiments on synthetic and real-world datasets demonstrate its superior performance over existing methods in OOD settings.\\n\\nThe strengths of the work include the new scenario that combines Graph NAS and OOD generalization and the evaluation that shows the generally better performance. 
The weaknesses include the presentation that causes confusions on two reviewers, the incremental contribution in techniques that leverage methods directly from existing OOD graph learning works (e.g., disentangled learning) and from NAS techniques (e.g., DARTS), and some concerns on the complexity. \\n\\nBased on the strengths and weaknesses, I recommend rejection for this submission in its current form, given the high bar of ICLR. That said, I agree that the idea of combining causality with Graph NAS is innovative and the experimental results are promising. Addressing the concerns and having OOD techniques more relevant to the NAS scenario could lead to a strong contribution.\", \"additional_comments_on_reviewer_discussion\": \"Two reviewers mentioned the concern of the complexity of the method. The authors did more experiments to show that the method is not substantially slower than other models with fixed architecture. However, the authors did not try the large graph cases. The concerns on the technical novelty were also raised by two reviewers. The authors provided some high-level explanations. If the authors could better clarify that with more rigorous discussion, via formulas, theorems, etc. It might be more helpful for others to appreciate it.\"}", "{\"comment\": \"> How enhancing NAS helps in handling distribution shifts.\\n> \\n\\nA4. Thank you for your follow-up response. We would like to clarify this in two points based on `A1`:\\n\\n1. As already discussed in `A1`, the **primary goal of this work** is to address **distribution shifts during the graph neural architecture search (NAS) process *itself*.** Tackling the \\u201cdistribution shifts problem in the NAS process *itself*\\u201d is a **critical yet under-explored challenge**, as prior studies [1-3] underscore the intricate relationship between graph data and the optimal architecture.\\n2. 
Furthermore, solving the problem of \\u201cNAS under distribution shifts,\\u201d as outlined above, also directly **contributes to improving overall prediction generalization**. As clarified in `A1`, we conceptualize the prediction pipeline in graph tasks as graph $\\\\rightarrow$ architecture $\\\\rightarrow$ label. Existing methods typically fix the network structure and focus only on the *graph-label relationship*, overlooking the influence of the architecture itself. This oversight can lead to suboptimal results under distribution shifts. In contrast, our approach *dynamically constructs the optimal architecture for each graph instance by mitigating the effects of spurious graph-architecture relationships during the NAS process*. By **leveraging a customized causal optimal architecture tailored to each input graph instance**, our method achieves superior prediction performance under distribution shift scenarios.\\n\\nWe hope this more detailed explanation clarifies your confusion.\"}", "{\"comment\": \"We are grateful to the reviewer for the insightful comments and suggestions. We have meticulously reviewed each point and provide the following responses:\\n\\n> W1. Computational Efficiency Concerns: Section 3.3 reveals a potential limitation.\\n> \\n> \\n> W2-2. Evaluation datasets: Performance on DrugOOD or GOOD to address concerns about memory consumption and computational time. \\n> \\n> Q2. How does your method behave in the large graph compared to the previous fixed-network methods since you may search in a large network space? Will it be out of memory or the time computation will exponentially grow\\uff1f\\n> \\n\\nA1. Thank you for your valuable feedback. Since DrugOOD is similar to OGBG-Mol* (both involving molecule datasets with comparable graph sizes), we conducted additional experiments on the GOODSST2 dataset from GOOD. This dataset, converted from text sequences, presents a different domain compared to molecule datasets. 
Here, nodes represent words, edges indicate relations between words, and labels reflect sentence sentiment.\\n\\nWe compared performance, time, and memory costs across methods. To ensure fairness, we recorded results after 100 epochs for all methods. Additionally, since methods like DIR, GIL, and CIGA did not achieve optimal performance within 100 epochs, we also recorded their results after 200 epochs.\\n\\n| **Method** | **Acc** | **Time (Mins)** | **Mem. (MiB)** | **Method** | **Acc** | **Time (Mins)** | **Mem. (MiB)** |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| Coral | 77.28 \\u00b1 1.98 | 62 | 4820 | DANN | 77.96 \\u00b1 3.50 | 38 | 4679 |\\n| DIR | 67.90 \\u00b1 10.08 | 171 | 4891 | DIR (200 epochs) | 79.19 \\u00b1 1.85 | 326 | 4891 |\\n| GIL | 75.53 \\u00b1 6.01 | 418 | 5628 | GIL (200 epochs) | 78.67 \\u00b1 1.48 | 816 | 5629 |\\n| GSAT | 78.79 \\u00b1 1.85 | 36 | 4734 | Mixup | 78.76 \\u00b1 2.00 | 31 | 4682 |\\n| ERM | 75.99 \\u00b1 3.25 | 29 | 4667 | GroupDRO | 76.97 \\u00b1 3.49 | 28 | 4695 |\\n| IRM | 78.12 \\u00b1 1.73 | 70 | 1389 | VREx | 79.62 \\u00b1 1.26 | 27 | 4692 |\\n| CIGA | 65.62 \\u00b1 7.87 | 157 | 4683 | CIGA (200 epochs) | 79.98 \\u00b1 1.61 | 306 | 4683 |\\n| CARNAS | 80.58 \\u00b1 1.72 | 199 | 2736 | | | | |\\n\\n**CARNAS achieves the best performance while maintaining competitive time and memory costs compared to non-NAS-based (fixed-network) graph OOD methods.** The efficiency of the architecture search process stems from CARNAS's ability to **simultaneously search for the architecture and optimize its parameters, minimizing additional computational overhead**. This demonstrates the practical viability and efficiency of CARNAS, even for larger-scale graph datasets.\\n\\n*Following your advice, we have included these new results on the GOODSST2 dataset in `Appendix F` in the revised [**`pdf`**](https://openreview.net/pdf?id=58AhfT4Zz1). 
Additionally, we have cited both DrugOOD and GOOD for their valuable contributions to graph benchmarks.*\"}", "{\"summary\": \"This paper presents CARNAS (Causal-aware Graph Neural Architecture Search), a novel approach that addresses the challenge of distribution shifts in Graph Neural Architecture Search.\\nUnlike existing methods that rely on potentially spurious correlations between graphs and architectures, CARNAS focuses on discovering and leveraging causal relationships to achieve better generalization.\", \"the_solution_consists_of_three_main_components\": \"1. Disentangled Causal Subgraph Identification: Discovers subgraphs with stable predictive capabilities across different distributions\\n2. Graph Embedding Intervention: Works in latent space to preserve essential predictive features while removing non-causal elements\\n3. Invariant Architecture Customization: Reinforces causal invariance and uses it to design generalized architectures\\n\\nThe approach's effectiveness is validated through experiments on both synthetic and real-world datasets, demonstrating superior out-of-distribution generalization compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The application of neural architecture search to address graph out-of-distribution (OOD) problems represents a significant innovation compared to traditional fixed-backbone approaches, as demonstrated in Appendix G.3. This shift from static to adaptive architectures opens new possibilities for graph OOD generalization.\\n2. The method's effectiveness is convincingly demonstrated through comprehensive experiments on SPMotif and OGBG-Mol* datasets, where it consistently outperforms existing approaches in handling distribution shifts.\\n3. 
The paper stands out for its clear and organized presentation, featuring well-designed figures that effectively illustrate complex concepts, complemented by rigorous mathematical formulations that provide a solid theoretical foundation.\", \"weaknesses\": \"1. Computational Efficiency Concerns: Section 3.3 reveals a potential limitation. The method searches through an extensive space of graph neural networks across all layers, which could be computationally more intensive than existing baselines. This increased computational and memory overhead might present challenges when applied to large-scale graphs.\\n2. Limited Experimental Validation: While DIR is included as a baseline, other important causal subgraph-based methods from recent works (such as [1], [2]) are not considered. \\nAdditionally, the evaluation datasets - SPMotif (synthetic) and OGBG-Mol* (relatively small graph sizes) - leave questions about scalability. \\nIt would be valuable to see performance on larger-scale datasets like DrugOOD or GOOD to address concerns about memory consumption and computational time.\\n3. Novelty Discussion:\\nWhile the neural architecture search component is innovative, the underlying methodology shares significant similarities with existing graph OOD approaches like DIR and related works [1-3]. The core mechanisms - using weighted top-k for causal subgraph identification and random combination for spurious subgraph intervention - closely parallel previous methods. This raises questions about the method's novelty beyond the architecture search component.\\n[1] Learning causally invariant representations for out-of-distribution generalization on graphs. \\n[2] Learning invariant graph representations for out-of-distribution generalization.\\n[3] Improving subgraph recognition with variational graph information bottleneck.\", \"questions\": \"1. 
Could you conduct an ablation study focusing on the neural network search details provided in the added Appendix C.1?\\nSpecifically, I'm interested in understanding:\\n- The effectiveness of different backbones under OOD distribution shifts, possibly illustrated through weight distributions.\\n- What are the time and memory requirements during the search process?\\n- Are all of the backbones crucial for effective OOD distribution handling? \\n\\n2. How does your method behave on large graphs compared to previous fixed-network methods, given that you may search in a large network space? Will it run out of memory, or will the computation time grow exponentially?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary\", \"comment\": \"Dear reviewers,\", \"we_would_like_to_summarize_the_rebuttal_and_revised_paper_as_below\": \"1. We provide a **detailed theoretical analysis** of the problem to further illustrate our method in a more rigorous manner. The added theoretical analysis is highlighted in blue in\\u00a0`Appendix C`\\u00a0in the\\u00a0**updated\\u00a0[`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**.\\n2. We have **visualized the causal subgraphs** for each dataset used in the case study in\\u00a0`Appendix G, Figure 8`, in the\\u00a0**updated\\u00a0[`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**. These visualizations illustrate the learned graph-architecture relationships effectively. \\n3. We further report both the training loss and validation loss of the two components ($\\\\mathcal{L}\\\\_{causal}$ and $\\\\mathcal{L}\\\\_{pred}$) with and without the dynamic\\u00a0schedule in\\u00a0**`Figure 5`**. This analysis verifies and illustrates the impact of\\u00a0the dynamic\\u00a0schedule on the trade-off between causal structure learning and architecture optimization. 
The\\u00a0*newly added detailed analysis is highlighted in blue*\\u00a0in\\u00a0`Appendix E.3`\\u00a0in the\\u00a0**updated\\u00a0[`pdf`](https://openreview.net/pdf?id=58AhfT4Zz1)**.\\n4. We have included new results on the **GOODSST2 dataset**, including **11 OOD-GNN baselines**, comparing both performance, time and memory costs, in\\u00a0`Appendix F`\\u00a0in the revised\\u00a0[**`pdf`**](https://openreview.net/pdf?id=58AhfT4Zz1). Results indicate that **CARNAS achieves the best performance while maintaining competitive time and memory costs compared to non-NAS-based (fixed-network) graph OOD methods.**\\u00a0The efficiency of the architecture search process stems from CARNAS's ability to\\u00a0**simultaneously search for the architecture and optimize its parameters, minimizing additional computational overhead**. This demonstrates the practical viability and efficiency of CARNAS, even for larger-scale graph datasets.\\n\\nWe sincerely thank all the reviewers for dedicating your valuable time and effort to evaluate our work. We truly hope the responses have clarified your concerns and contributed to a better understanding of our research.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> W3. *\\\"The survey and introduction of related work are insufficient.\\\"*\\n> \\n\\nA3. Thank you for your comment. In our original paper, due to the page limitations of the main paper, we included a concise introduction of related work in `Section 5` and provided **additional review of related work in `Appendix H`**. 
Specifically:\\n\\n- `Appendix H.1` introduces the development of **Graph Neural Architecture Search** (GraphNAS), including foundational and recent research efforts focused on reinforcement learning and other strategies to optimize GNN architectures for various tasks, such as graph classification.\\n- `Appendix H.2` reviews advancements in **out-of-distribution generalization for GNNs**, mentioning existing methods that focus on identifying environment-invariant subgraphs to mitigate distribution shifts. It also explains our novel approach of leveraging causal relationships between graphs and architectures for robust generalization during NAS process.\\n- `Appendix H.3` discusses **causal learning on graphs**, summarizing existing methods that incorporate causality for improved representation and generalization, and highlights our focus on automating architecture design by discovering causal relationships between graphs and GNN architectures to address distribution shifts in NAS process.\\n\\n*Following your suggestion, we have further added more references and expanded the discussion of related works in `Appendix H` to provide a more comprehensive survey. If you believe there are still missing references, please feel free to let us know, and we are happy to include them.*\"}", "{\"summary\": \"This paper addresses the challenge of graph neural architecture search (Graph NAS) under distribution shifts. The authors observe that existing Graph NAS methods fail to generalize well when there are distribution shifts between training and testing data, since they may exploit spurious correlations that don't hold across distributions. 
To tackle this, they propose CARNAS (Causal-aware Graph Neural Architecture Search), a novel approach that discovers and leverages causal relationships between graphs and architectures.\\n\\nThe main contributions of this work are: 1) it is the first work to study Graph NAS under distribution shifts from a causal perspective; 2) they propose a novel framework with a disentangled Causal Subgraph Identification to find stable predictive subgraphs, a Graph Embedding Intervention component to validate causality in latent space, and Invariant Architecture Customization to handle distribution shifts. Comprehensive experiments on synthetic and real-world datasets show superior out-of-distribution generalization compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper is the first to study Graph NAS under distribution shifts using causality. The problem is well-motivated with clear real-world relevance and applications.\\n2. The authors present comprehensive experiments on both synthetic and real-world datasets that demonstrate clear performance improvements over existing baselines. The thorough ablation studies effectively validate each component of the proposed method, and the analysis provides valuable insights into the model's behavior.\\n3. The paper is well-structured. The experimental analysis is clearly presented, making the work reproducible.\", \"weaknesses\": \"1. While the paper shows good performance on the tested datasets, it lacks a detailed analysis of computational complexity and memory requirements. Specifically, the time complexity of $O(|E|(d_0 + d_1 + |O|d_s) + |V|(d_0^2 + d_1^2 + |O|d_s^2) + |O|^2d_1)$ could become prohibitive for very large graphs. The authors should discuss how their method performs on graphs with millions of nodes and edges, which are common in real-world applications like social networks.\\n2. 
The method requires careful tuning of four critical hyperparameters ($t$, $\\mu$, $\\theta_1$, $\\theta_2$), which may significantly impact performance. In particular, the edge importance threshold $t$ in Eq.(6) and the intervention intensity $\\mu$ in Eq.(10) show high sensitivity in experiments. While the authors provide some sensitivity analysis on the BACE dataset, they don't fully explain how to effectively tune these parameters for new datasets or application domains. \\n3. The paper lacks formal theoretical guarantees for the causal discovery process. While the empirical results are strong, the authors should clarify under what conditions their method is guaranteed to identify true causal relationships and provide bounds on the probability of discovering spurious correlations. Additionally, the relationship between the intervention loss and causal invariance could be more rigorously established.\", \"questions\": \"1. In the graph embedding intervention module, you use $\\mu$ to control intervention intensity in Eq.(10): $H_{v_j} = (1-\\mu)\\cdot H_c + \\mu\\cdot H_{s_j}$. Have you considered using adaptive intervention strategies where $\\mu$ varies based on the structural properties of $G_c$ and $G_s$? This could potentially better handle graphs with varying degrees of spurious correlations.\\n2. The overall objective function (Eq.(17)) uses a linearly growing $\\sigma_p$ corresponding to epoch number. Could you elaborate on why linear growth was chosen over other schedules (e.g., exponential, step-wise)? How does the schedule of $\\sigma_p$ affect the trade-off between causal structure learning and architecture optimization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
57yOS3nIVm
Divide and Conform: Unleashing Spatial Filter Atoms for Unsupervised Target Transferability
[ "Gaurav Patel", "Qiang Qiu" ]
The straightforward fine-tuning of the pre-trained model for the target task bears the risk of under-utilizing the foundational knowledge accrued by the pre-trained model, resulting in the sub-optimal utilization of transferable knowledge, consequently impeding peak performance on the target task. To address this, we introduce $\textit{Divide and Conform}$, aimed at augmenting the transferability of pre-trained convolutional neural networks (ConvNets), $\textit{in the absence of base data}$. This strategy exploits the mathematical equivalence of the convolution operation, conceptualizing it as a two-step process involving spatial-only convolution and channel combination. To achieve this, we decompose ($\textit{Divide}$) the filters of pre-trained ConvNets into spatial filter atoms (responsible for spatial-only convolution) and their corresponding atom-coefficients (responsible for channel combination). Our observations reveal that solely fine-tuning ($\textit{Conform}$-ing) the spatial filter atoms, comprising only a few hundred parameters, renders the transferability of the model efficient, without compromising on the predictive performance. Simultaneously, the static atom-coefficients serve to retain the base (foundational) knowledge from the pre-trained model. We rigorously assess this dual-faceted approach within the demanding and practical framework of cross-domain few-shot learning, showcasing the approach's substantial capability of transferring the knowledge in a parameter-efficient manner.
[ "Filter Decomposition", "Domain Transferability", "Efficiency" ]
https://openreview.net/pdf?id=57yOS3nIVm
https://openreview.net/forum?id=57yOS3nIVm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "p5rf2oO2dX", "olTNqNjEOY", "ZZJpizZvLN", "W5c2AzbTol" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730535373579, 1730650775371, 1731534256469, 1730715499029 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7801/Reviewer_mEe3" ], [ "ICLR.cc/2025/Conference/Submission7801/Reviewer_kZRN" ], [ "ICLR.cc/2025/Conference/Submission7801/Authors" ], [ "ICLR.cc/2025/Conference/Submission7801/Reviewer_3kt8" ] ], "structured_content_str": [ "{\"summary\": \"This paper argues that directly fine-tuning pre-trained models carries the risk of insufficiently leveraging the foundational knowledge accumulated during pre-training, which may adversely affect performance on target tasks. To address this issue, this paper decomposes the convolution kernel into two components: spatial filter atoms and atom-coefficients. During the fine-tuning phase, this paper only fine-tunes the spatial filter atoms, thereby achieving fine-tuning with fewer parameters.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"+\\uff1aIn the context of parameter-efficient tuning, this paper decomposes convolutional kernels into two components, and focuses on fine-tuning spatial filter atoms while retaining existing knowledge to effectively transfer to target tasks. This approach provides a different perspective on fine-tuning tasks in convolutional layers\\n\\n+: The method is straightforward and easy to understand.\", \"weaknesses\": \"-: The motivation of this paper is not very clear. As stated in line 66-71, this paper points out three challenges: (1) lack of the base dataset, (2) heavy computational cost of full finetuning, and (3) scarcity of labeled target data. They are common issues and have been well studied in previous parameter-efficient tuning works. 
Therefore, what are the advantages of this work over previous parameter-efficient tuning works?\\n\\n-: The technical contribution of this work could be further clarified. Particularly, compared to previous parameter-efficient tuning works, this paper relies on convolutional kernel decomposition techniques. Therefore, the authors should discuss the advantages of the introduced convolutional kernel decomposition over existing works.\\n\\nAdditionally, dictionary learning for kernel decomposition involved in this work is complex and will bring significant computational overhead. More importantly, how about its scalability to large-scale convolutional neural networks and vision transformers? Particularly, vision transformers are most widely used as backbones of pre-trained models.\\n\\nFor the experiment part, it seems to lack sufficient analysis on why only fine-tuning the spatial filter atoms yields effective results. Furthermore, it also lacks experimental support on how freezing the atom-coefficients preserves the knowledge of the pre-trained network. Additionally, the authors should conduct more experiments with larger convolutional kernels and larger model sizes to show the generalizability of the proposed methods.\", \"questions\": \"How is LoDC implemented on ResNet18?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Authors introduce Divide and Conform, aimed at augmenting the transferability of pre-trained\\nconvolutional neural networks (ConvNets), in the absence of base data. 
\\nIt is a two-step process: spatial-only convolutions and channel combination.\\n\\nAuthors claim that their approach is designed to enhance the adaptability of pre-trained models to specific\\ntarget tasks, while assuming only a limited amount of unlabeled data is available for the target task and no access\\nto the extensive base dataset, achieving all this in a parameter-efficient manner.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well written and easy to follow.\\nDetailed analysis has been done on several datasets. \\nSeveral quantitative results are presented.\", \"weaknesses\": \"I take from the results on the EuroSAT dataset the message that the proposed method found it hard to learn discriminative features\\nas compared to other methods.\\nIt would be great to see some qualitative results. \\nI am new to this direction of research; for me, I am trying to see sparsity and your method together: how are they different or similar?\", \"questions\": \"Please follow weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"Exploring the transfer of pre-trained model knowledge in cross-domain few-shot learning settings is valuable. This manuscript introduces \\\"Divide and Conform,\\\" a method designed to enhance the transferability of pre-trained convolutional neural networks (ConvNets) without relying on base data. The approach involves fine-tuning only the decomposed spatial filter atoms while keeping the atom-coefficients frozen, facilitating cross-domain transfer. 
Evaluated on multiple benchmark datasets, the proposed method demonstrates efficient knowledge transfer with minimal parameter adjustment.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Exploring new efficient fine-tuning methods for transferring diverse pre-trained models is meaningful.\\n\\n2. The paper provides a clear and logical description and definition of the proposed method.\\n\\n3. The approach achieves comparable results.\", \"weaknesses\": \"1. The manuscript elaborates in detail on research progress in cross-domain few-shot learning and spatial filtering decomposition in the second section (Related Works). However, discussions of the latest related works are lacking. Additionally, the connections and distinctions between this manuscript and existing works in the field should be carefully explained.\\n\\n2. The arrangement of tables and figures should align with the textual content to facilitate reader comprehension and comparison.\\n\\n3. In the experimental section, the manuscript should include comparisons with the latest cross-domain few-shot learning methods. Additionally, SimCLR is a straightforward framework for contrastive learning of visual representations. The authors should compare their approach with more mainstream and efficient parameter-tuning methods, such as vision prompt tuning.\\n\\n4. The experimental results indicate that the proposed method does not outperform all baselines comprehensively. The authors should provide a detailed explanation of this.\", \"questions\": \"1.In this manuscript, the authors conduct a detailed comparison of the proposed method\\u2019s performance with SimCLR and LORA-style methods under a cross-domain few-shot learning setting. 
Would the task benefit from stronger parameter-tuning performance under the setups of SimCLR and LORA-style methods?\\n\\n2.Does the proposed method still demonstrate an advantage on more challenging benchmarks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
57xboRTbwI
Bias Analysis in Unconditional Image Generative Models
[ "Xiaofeng Zhang", "Simon Lacoste-Julien", "Aaron Courville", "Yash Goyal" ]
The widespread usage of generative AI models raises concerns regarding fairness and potential discriminatory outcomes. In this work, we define the bias of an attribute (e.g., gender or race) as the difference between the probability of its presence in the observed distribution and its expected proportion in an ideal reference distribution. Despite efforts to study social biases in these models, the origin of biases in generation remains unclear. Many components in generative AI models may contribute to biases. This study focuses on the inductive bias of unconditional generative models, one of the core components, in image generation tasks. We propose a standardized bias evaluation framework to study bias shift between training and generated data distributions. We train unconditional image generative models on the training set and generate images unconditionally. To obtain attribute labels for generated images, we train a classifier using ground truth labels. We compare the bias of given attributes between generation and data distribution using classifier-predicted labels. This absolute difference is named bias shift. Our experiments reveal that biases are indeed shifted in image generative models. Different attributes exhibit varying bias shifts' sensitivity towards distribution shifts. We propose a taxonomy categorizing attributes as $\textit{subjective}$ (high sensitivity) or $\textit{non-subjective}$ (low sensitivity), based on whether the classifier's decision boundary falls within a high-density region. We demonstrate an inconsistency between conventional image generation metrics and observed bias shifts. Finally, we compare diffusion models of different sizes with Generative Adversarial Networks (GANs), highlighting the superiority of diffusion models in terms of reduced bias shifts.
[ "image generative models", "bias analysis", "distribution shift" ]
Reject
https://openreview.net/pdf?id=57xboRTbwI
https://openreview.net/forum?id=57xboRTbwI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sG2qAJXoif", "ZFmqlcymFp", "MvpzefSmgX", "Gbd9o6Yb2m", "CIZ3TQoRN5", "737mowtUYm", "57CDqxdNPP", "4BfiaevHwj" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1735085507603, 1732694512916, 1733191112149, 1730709030899, 1730582048580, 1730622293700, 1737524163113, 1730719183272 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12048/Area_Chair_RNWo" ], [ "ICLR.cc/2025/Conference/Submission12048/Reviewer_FbqA" ], [ "ICLR.cc/2025/Conference/Submission12048/Authors" ], [ "ICLR.cc/2025/Conference/Submission12048/Reviewer_WsGs" ], [ "ICLR.cc/2025/Conference/Submission12048/Reviewer_QibE" ], [ "ICLR.cc/2025/Conference/Submission12048/Reviewer_FbqA" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12048/Reviewer_FJJr" ] ], "structured_content_str": [ "{\"metareview\": \"This work investigates the manifestation of bias in unconditional image generation process. The core idea is to use a classifier (trained on the same data that was used to train the generation process) as a tool to measure the bias gap between train and gen distributions.\\n\\nThe problem of studying inductive biases of generative models is an important one and an effort along this line is well appreciated. \\n\\nThere are several concerns raised by the reviewers regarding this work. The major ones are:\\n\\n- the definition of bias which seems to be too simplistic leading to a metric called ABS which might not be effective enough to quantify biases. \\n- lack of proper justification behind the use of a single classifier as a tool to quantify biases -- a common concern by all the reviewers and myself. Several valid points have been raised regarding this and the rebuttal lacked convincing replies. 
\\n- outdated models under consideration (e.g., BigGAN)\\n\\nThis paper requires solid justifications behind the use of the classifier and better experiments on recent generative models.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewers raised several concerns regarding such as (1) the definition of bias term; (2) use of single classifier; (3) use of outdated models etc.\", \"Authors did engage well during the rebuttal however, the above concerns were not answered satisfactorily and unfortunately there was no clear indication towards the acceptance of this work in its current form.\"]}", "{\"comment\": \"Many thanks to the authors for their detailed responses. The response addresses some of my concerns, and I have increased my rating. However, I don't think the response on the classifier selection is very convincing. On many attributes, the classifier can have high accuracy and align with previous literature, but this does not necessarily suggest that the analysis on those attributes with low accuracy is correct. Also, even though the generated data seems to have a similar distribution to the real data, it does not guarantee that the classifier trained on the real data will maintain its performance on the generated data. Literature (e.g., works in adversarial attacks) has shown that even a slight change in the data distribution may largely affect the model's performance. Thus, I suggest the paper perform a more detailed analysis of this.\"}", "{\"comment\": \"Thank you for the feedback and acknowledgment of our justifications. We would greatly appreciate if the reviewer could provide further clarifications for the following:\\n1. What specific type of detailed analysis would, in the reviewer's view, ensure that a classifier trained on real data maintains its performance when applied to generated data, particularly in the context of unconditional image generation? \\n2. 
Is there any additional input from you to help refine the analysis on attributes with lower accuracy, beyond the methods we have already explored in the paper, particularly with respect to achieving the best possible accuracy and aligning with previous literature?\"}", "{\"summary\": \"This paper presents a framework for bias evaluation of unconditional image generative models. The authors measured the bias shift in the original and synthetic data and tested their framework on publicly available datasets. They found that bias shift happens in image generative models and proposed two taxonomies to categorize the bias shift for different attributes. The paper is well-written and well-formulated. However, a comparison with existing bias evaluation frameworks needs to be made.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work presents a bias evaluation framework for unconditional image generative models.\\n2. The authors proposed two taxonomies for categorizing bias shifts for different attributes.\\n3. The authors experimented with different sizes of diffusion models to observe how bias shift is happening.\", \"weaknesses\": \"1. As this paper presents a bias evaluation framework for image datasets, it needs to be compared with other evaluation frameworks, e.g., [1]. How does the presented framework differ from [1]?\\n\\n2. Limitations of this evaluation framework should be discussed in the paper.\\n\\n#### References:\\n\\n[1] Wang, Angelina, et al. 
\\\"REVISE: A tool for measuring and mitigating bias in visual datasets.\\\"\\u00a0_International Journal of Computer Vision_\\u00a0130.7 (2022): 1790-1810.\", \"questions\": \"See weakness point 1\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an analysis of non-conditional generative models, GANs and Diffusions, for image generation. By using a classifier trained on the same dataset, biases are identified as subjective and non-subjective attributes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper analyzes an important topic of bias in generative models. These models have been shown to learn the biases from their datasets, and this paper proposes a new angle towards understanding these biases.\\n\\nThe paper is generally well written and easy to follow.\\n\\nThe detailed analysis of logits seems to not have been studied before.\", \"weaknesses\": [\"As a preliminary, I have reviewed this paper before for the NeurIPS SafeGEN workshop. I have reread this version of the paper, and my opinion has not changed.\", \"Here's my concerns:\", \"Calling attributes subject vs non-subjective is very strange. For example, \\\"Pale skin\\\" or \\\"male\\\" in CelebA being a non-subjective attribute is surprising. I would be convinced if there were a user study to validate these attributes are similarly subjective to humans, but as it stands I'm not convinced.\", \"The raw bias shift is strikingly small. The subjective logits on figure 5c look extremely similar between the synthetic versus real data. Especially when comparing 5c to 5e, it's surprising that Male landed in non-subjective and Smiling did not.\", \"Using the same dataset for training the generator and classifier is very problematic: it's self-contamination. 
The bias studied in this paper could come from: the dataset itself, the generator's training/architecture, or the classifier's training/architecture. Given the generator and classifier are mapped to the same data distribution, the inherent biases are muddled between the two. I would have liked to have seen dataset splits where half the data is used to train the generator and half the classifier. That would improve the self-contamination issue substantially.\", \"Finally, the actual take-aways from the paper are fairly limited. Assuming my previous point were addressed, the fundamental why question is not answered: why are some attributes represented more/less in the synthetic distribution. It is somewhat useful to know that some attributes are, but I would be very interested to know how to predict which attributes would be over/under represented by just training a classifier.\"], \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an evaluation pipeline to analyze the bias shift of different generative models. The generative models are trained on the training dataset, and an attribute classifier is also pretrained on the same dataset. The attribute prediction difference between the original dataset and the generated images measures the bias shift. The paper also separates the attribute into two categories, subjective and non-subjective, to further analyze the insight of the bias shift.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In general, the analysis of the bias shift of different generative models is interesting, and the pipeline's high-level idea seems to be sound. The subjective/non-subjective study is also interesting. The paper includes a vast amount of empirical results for the analysis.\", \"weaknesses\": \"1. Some of the discussion in the method section (sec. 
3) seems to be redundant or not tied to the paper. For example, what is the purpose of introducing $P^{ideal}$? Although it is canceled in the final equation, I don't think it is necessary to introduce such a term because, intuitively, $|{P^{gen} - P^{val}}|$ itself is sufficient to measure the bias shift. Introducing extra and probably unnecessary assumptions may overcomplicate the method and lead to confusion. Also, Section 3.1 introduces a definition of \"conditional bias,\" which is not discussed or studied in the rest of the paper. What is the purpose of introducing this definition?\\n\\n2. The paper claims (L142) that \"pre-trained models introduce their own biases, rendering the predicted labels unreliable for accurate bias evaluation.\" However, I disagree with this argument. I agree that the pre-trained model may be biased, but this reason does not disqualify them from performing attribute classification. Such a classifier serves as the expert in labeling attributes, so the most important criterion, if not the only criterion, should be classification accuracy. If any pre-trained models have outstanding attribute classification performance on the training/val dataset, I don't see why they shouldn't be used. Further, those pre-trained models can be finetuned on the training dataset (which this paper did) for an even better classification performance on specific datasets (e.g., CelebA), which can only benefit the bias shift analysis. \\n\\n3. Further, the accuracy of the classifier is not sufficient for the analysis. Although the accuracies of the majority of attributes are 90%+, there is still a considerable number of attributes on which the classifier performs unsatisfactorily. This fact is critical to the analysis, considering the listed subjective attribute examples are placed in the lower portion of the performance list. Lower accuracy may suggest higher analysis noise and larger ABS measuring error. 
Since the ABS values for non-subjective and subjective attributes are ~1% and 3-5%, the classifier with 91.7% (lowest attribute: 68.34%) or 90.5% (lowest attribute: 71.65%) accuracy is not good enough. \\n\\n4. Further, the classifier is trained on the training set and directly applied to the training, validation, and generation sets. However, unlike the training and validation sets, which are sampled from the same distribution, the generation set may have a different distribution than the original dataset. Thus, the classifier may suffer from distribution shift and/or visual domain generalization challenges, so the classifier may not be reliable on the generation set. This issue can further weaken the paper's analysis and conclusion. \\n\\n5. Although the attempt to split the attributes into subjective and non-subjective groups is interesting, I am not convinced that the splitting method used in the paper (decision boundary-based) is valid. The decision boundary is closely connected with classifier accuracy, which can be further connected with analysis noise and measuring errors. Thus, those attributes with unclear boundaries are more likely to have higher ABS errors. Additionally, this splitting may not match humans' definition of \\\"subjective.\\\" Those \\\"subjective attributes\\\" by human definition (e.g., wearing glasses) may be easier to classify, so they may have clearer boundaries. However, there is no guarantee of this, and the paper also does not have a complete list of subjective and non-subjective attributes to verify.\", \"questions\": \"Please refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper investigates how inductive bias in unconditional generative models affects bias in generated results. 
The authors define bias shift as the difference between the probability of attribute presence in the training and generated distributions, and train a classifier to categorize attributes to quantify bias shift. Furthermore, attributes are categorized as subjective or non-subjective based on the position of the classifier's decision boundary. The authors validate multiple models, including diffusion models and GANs, on two datasets, CelebA and DeepFusion, revealing related patterns.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Problem Definition: The paper focuses specifically on studying the inductive bias of generative models themselves, avoiding other factors such as dataset bias and prompts, providing a novel perspective for analyzing bias sources.\\n\\n2. Methodology: Proposes a standardized bias evaluation framework that uses the same classifier for all label predictions, ensuring consistency in evaluation.\\n\\n3. Writing: The paper is well-structured and explains complex concepts in an understandable way.\", \"weaknesses\": \"1. Bias Definition: The paper's definition of bias, which only measures differences in attribute occurrence probabilities, may be overly simplistic. Bias typically encompasses more complex dimensions including social prejudices and systemic discrimination.\", \"oversimplified_metrics\": \"The Average Bias Shift (ABS) metric may be too reductive as it:\\n\\n2. ABS doesn't consider correlations between attributes and ignores the varying social impact weights of different attributes\\n\\n3. The 0.01 threshold for subjective/non-subjective classification lacks justification, and there are no ablation studies on threshold selection. The data-driven categorization approach may overlook inherent social and ethical implications of attributes\\n\\n4. Relying on a single classifier may introduce classifier-specific biases. 
Figures 4 and 5 show relatively low accuracy (90% or below) for many attributes, questioning the reliability of the pre-trained classifier. It would be better to consider using Large Language Models as supplementary evaluators (Just a suggestion, no need to add experiments).\\n\\n5. The current model selection appears dated, primarily relying on ADM (2021) and BigGAN (2019) for experiments. They may not reflect the latest advances in generative modeling. The paper would benefit significantly from validating the proposed framework on more recent architectures, such as Stable Diffusion, DiT, PixArt-\\u03b1 for diffusion models, and StyleGAN3 for GANs.\", \"questions\": \"Do the accuracy rates reported in Figures 4 and 5 of the appendix refer to training set or validation set performance?\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
57iQSl2G2Q
Safe Bayesian Optimization for Complex Control Systems via Additive Gaussian Processes
[ "Hongxuan Wang", "Xiaocong Li", "Lihao Zheng", "Adrish Bhaumik", "Prahlad Vadakkepat" ]
Controller tuning and optimization have been among the most fundamental problems in robotics and mechatronic systems. The traditional methodology is usually model-based, but its performance heavily relies on an accurate mathematical system model. In control applications with complex dynamics, obtaining a precise model is often challenging, leading us towards a data-driven approach. While various researchers have explored the optimization of a single controller, it remains a challenge to obtain the optimal controller parameters safely and efficiently when multiple controllers are involved. In this paper, we propose SafeCtrlBO to optimize multiple controllers simultaneously and safely. We simplify the exploration process in safe Bayesian optimization, reducing computational effort without sacrificing expansion capability. Additionally, we use additive kernels to enhance the efficiency of Gaussian process updates for unknown functions. Hardware experimental results on a permanent magnet synchronous motor (PMSM) demonstrate that compared to existing safe Bayesian optimization algorithms, SafeCtrlBO can obtain optimal parameters more efficiently while ensuring safety.
[ "Safe Bayesian Optimization", "Complex Control Optimization", "Additive Gaussian Processes" ]
Reject
https://openreview.net/pdf?id=57iQSl2G2Q
https://openreview.net/forum?id=57iQSl2G2Q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yRawFAdN8A", "xemuVQKsQx", "uJDOxsmTNI", "n7o0yLy9le", "mLMWUmNXov", "fFXd69Wy1T", "dHe5kHtMnf", "cRaQ7X66PW", "bWN9JM8puk", "avyWupDe8K", "YMXSr9cwq8", "Vwt6RrFuJO", "VXcvXWcXJo", "UyjKUsGjRf", "UZQfrDtSXu", "TV60KWntrj", "T7H3OHW104", "S6XZQ0CyZM", "QIR7yUj85o", "Ms2oW7MFxm", "KYOiJkbhBJ", "HHH1Tp0u3l", "GPpHWXF8qm", "BogziUqFXJ", "BjruOaIKV4", "BR6KedqegB", "8gnwJblo2G", "8HcMRMPuXv" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732072258434, 1732686526346, 1732676243920, 1732071653063, 1732075306966, 1730497445020, 1734949091949, 1732624204153, 1730586364262, 1732071517508, 1732683827324, 1732074048424, 1733196217283, 1732074742933, 1732542865889, 1732582336815, 1732074550690, 1732678551279, 1732072577389, 1732075013105, 1732074959574, 1729536413066, 1737523935694, 1732128929829, 1732074141285, 1730669332779, 1732114797889, 1732073471527 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_hSAs" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_QXk1" ], [ "ICLR.cc/2025/Conference/Submission8833/Area_Chair_Lgqd" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_QXk1" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_R4HB" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" 
], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_zLs8" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_zLs8" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_hSAs" ], [ "ICLR.cc/2025/Conference/Submission8833/Reviewer_R4HB" ], [ "ICLR.cc/2025/Conference/Submission8833/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank the reviewer for the time and effort dedicated to evaluating our manuscript, especially for providing an in-depth understanding of the theoretical results. Your perceptive comments and valuable suggestions have significantly improved the quality and clarity of our paper. Below, we address the concerns raised and respond to the reviewer\\u2019s questions. The corresponding modifications in the paper have been highlighted in blue.\\n\\n---\\n\\n### **Comment 1**\\n\\n> \\u201cThe theoretical results are mostly stated using plain text instead of mathematical formulas.\\u201d\\n\\n**Response:** \\nWe thank the reviewer for the suggestion and have made the presentation of the theoretical results as formal as possible. Please refer to the updated PDF.\\n\\n---\\n\\n### **Comment 2**\\n\\n> \\u201cThe statement \\\"Srinivas et al. 
(2010) [\\u2026]\\\" is incorrect.\\u201d \\n> \\u201cSimilarly, the statement \\\"However, SAFEOPT [\\u2026]\\\" is inaccurate.\\u201d\\n\\n**Response:** \\nWe sincerely thank the reviewer for the valuable corrections to the related work section, and we have made the necessary revisions to the corresponding content in the paper. [1] derived the regret bounds for Gaussian Process (GP) optimization and established the relationship between the cumulative regret of GP-UCB and the maximum information gain. Since the exploitation stage of our method employs GP-UCB as the acquisition function, the proof of Theorem 5.2 directly builds upon the conclusions of [1]. \\n[2, 3] did not specify a particular type of kernel, instead referring broadly to \\\"some kernels.\\\" [4] utilized the Mat\\u00e9rn kernel with $ \\\\nu = 3/2 $ in their experimental section and included the Gaussian kernel as an example in their official implementation. Similarly, [5] employed the Mat\\u00e9rn kernel with $ \\\\nu = 5/2 $. As $ \\\\nu $ increases, the Mat\\u00e9rn kernel approaches the Gaussian kernel, whereas at smaller $ \\\\nu $ values, the Mat\\u00e9rn kernel exhibits stronger locality.\\n\\n---\\n\\n### **Comment 3**\\n\\n> \\u201cIs equation (1) correct? Shouldn't the last term have t in the subscript instead of k? The same holds for equation 2.\\u201d\\n\\n**Response:** \\nIn the context of control, signals, and systems, $ k $ is conventionally used to denote the time step in discrete systems or signals, while $ t $ is used for the time variable in continuous systems or signals. Discrete controllers are typically implemented using digital processors, microprocessors, or computers, whereas continuous controllers are generally realized using analog electronic components, such as capacitors, resistors, and operational amplifiers. 
\\nSince the hardware experiments in this paper, as well as in related works (e.g., drone control [4] or robot control), involve discrete controllers, equations (1) and (2) adopt $ k $ to represent the time step. This choice aligns with the practical implementation of discrete control systems in digital hardware.\\n\\n---\\n\\n### **Comment 4**\\n\\n> \\u201cI feel that Theorems 4.1 and 4.2 are mostly trivial and should be placed in the appendix.\\u201d\\n\\n**Response:** \\nWe thank the reviewer\\u2019s suggestion and have moved Theorems 4.1 and 4.2 to the Appendix.\\n\\n---\\n\\n### **References**\\n\\n1. Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: no regret and experimental design. In *Proc. of the International Conference on Machine Learning (ICML)*, pp. 1015\\u20131022, 2010.\\n2. Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with gaussian processes. In *Proc. of the 32nd International Conference on Machine Learning (ICML)*, pp. 997\\u20131005, Lille, France, 2015.\\n3. Yanan Sui, Vincent Zhuang, Joel Burdick, and Yisong Yue. Stagewise safe bayesian optimization with gaussian processes. In *Proc. of the 35th International Conference on Machine Learning (ICML)*, pp. 4781\\u20134789, Stockholm, Sweden, 2018.\\n4. Felix Berkenkamp, Angela P. Schoellig, and Andreas Krause. Safe controller optimization for quadrotors with gaussian processes. In *Proc. of 2016 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 491\\u2013496, Stockholm, Sweden, 2016.\\n5. Marcello Fiducioso, Sebastian Curi, Benedikt Schumacher, Markus Gwerder, and Andreas Krause. Safe contextual bayesian optimization for sustainable room temperature pid control tuning. In *Proc. of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)*, pp. 
5850\\u20135856, 2019.\"}", "{\"title\": \"A gentle reminder for new explanations and updated PDF\", \"comment\": \"Dear Reviewer,\\n\\nThis is a gentle reminder that we have added new content to address your concerns in the updated PDF. Please refer to Theorem D.3 and Appendix E. Thank you again for helping us revisit the functionality of our proposed acquisition function.\"}", "{\"comment\": \"Thank you for the detailed response. Now I understand that the 6 parameters are tuned simultaneously whereas the 3x2 parameters for the quadcopters are optimized independently. I still believe that cascade control is not a good example for the approach. In motor control, the design of the cascade is typically not an iterative process; instead, there are clear rules for how to tune each stage. I am not saying that this example is useless, but there are much better systems, which would strengthen the paper.\\n\\nHowever, I\\u2019m satisfied by some of the authors\\u2019 updates and will raise my score.\\n\\nThanks to the authors for their work, and I\\u2019m looking forward to the future work.\"}", "{\"comment\": \"### **Comment 3**\\n\\n> \\u201cI agree that controller tuning including many parameters can be tricky. However, cascaded controllers are a bad example of that.\\u201d \\n> \\u201cI suggest something like distributed controllers in large water/power networks.\\u201d\\n\\n**Response:** \\nThe reviewer's comment regarding cascade controller tuning is accurate in the context of traditional industrial applications. However, achieving precise control with this approach typically requires iterative adjustments, progressing sequentially from the innermost to the outermost layer. Since parameter changes in the outer controllers can affect the optimality of the inner controllers, it becomes necessary to cycle back to the innermost layer for re-tuning, followed by further adjustments towards the outermost layer. 
This iterative process continues until the system's performance reaches a near-optimal state. To enhance optimization efficiency, simultaneous tuning of both inner and outer controllers becomes essential, allowing for precise control of cascade structure controllers. \\n\\nFurthermore, it is worth noting that in the inner loop, there are two controllers operating in parallel rather than in a cascade configuration. This distinction makes our problem more challenging compared to typical cascade control.\\n\\nIn addition, as noted in lines 533-535 of the paper (currently in lines 505-507 in the updated PDF), the proposed algorithm is not constrained to cascade controllers. We highlighted various other examples of multi-controller systems in lines 29-34 and lines 84-89. The distributed controller systems mentioned by the reviewer are also excellent examples. Due to equipment limitations, our experiments were conducted on a permanent magnet synchronous motor (PMSM) platform. Nevertheless, we are optimistic that future opportunities will allow us to apply the algorithm to larger and more complex control platforms, as suggested by the reviewer.\\n\\n---\\n\\n### **Comment 4**\\n\\n> \\u201cFurthermore, motor controllers are a bad example for safe BO. As long as no load is attached to the motor [\\u2026] it is almost impossible to destroy the system. Typical amplifiers do have a \\\"max current\\\" setting, or it is simply defined in the software.\\u201d\\n\\n**Response:** \\nIn fact, although the permanent magnet synchronous motor (PMSM) operates in a no-load state during controller parameter tuning, certain controller parameters will still make the entire system unstable during our experiment. In such cases, the speed tracking error increases over time, requiring human intervention to forcibly shut down the system. 
Safe BO algorithms can prevent the selection of such parameters, thereby ensuring the system remains in a stable state throughout the process.\\n\\nWhile setting saturation limits for controller outputs, currents, or voltages in the software can also mitigate instability, this approach relies on specific domain knowledge. Moreover, implementing such saturation constraints effectively alters the distribution of the performance function. For instance, when the control parameters reach certain thresholds, the controller output saturates, and further increases in control parameter values no longer affect the performance function. This manual modification of the performance landscape can pose challenges for safe BO algorithms, potentially making it harder for them to converge to the optimal parameters.\\n\\n---\\n\\nWe sincerely appreciate your insightful comments and the time invested in providing detailed feedback. We hope our revisions and responses effectively convey the intent of our work and address your concerns. If any issues or questions remain, please do not hesitate to raise them\\u2014we are more than happy to offer further clarifications.\"}", "{\"comment\": \"### **Comment 5**\\n\\n> **Question on \\\"can be seamlessly integrated into real-world complex control applications\\\":** \\n\\n**Response:** \\nWe appreciate the reviewer highlighting this point. The statement \\\"can be seamlessly integrated into real-world complex control applications\\\" underscores the algorithm's user-friendliness. Users simply need to define the performance and safety functions according to the task and configure the base kernels and main hyperparameters. The algorithm can then be applied without complex re-implementation. \\n\\nThis is evident from the implementation details provided. Whether applied to synthetic simulations or hardware experiments, the overall implementation process remains the same. 
The only differences lie in the definition of the performance and safety functions, as well as the selection of base kernels and the main function hyperparameters. \\n\\n---\\n\\n### **Comment 6**\\n\\n> **Suggestion on additional experiments:** \\n> \\u201cIn the synthetic experiments, only a noise free setting is evaluated. The results would be more relevant and would underline the contribution of the method more, if function evaluations would be noisy. In addition, this setting would also better represent real-world settings.\\u201d\\n\\n**Response:** \\nWe thank the reviewer for the suggestion regarding noisy synthetic experiments. While we believe that the noisy hardware experiments sufficiently represent real-world settings, and baseline synthetic experiments in Kirschner et al. (2019) and Bardou et al. (2024) are also noise free, we are willing to incorporate noisy settings into the synthetic benchmarks for better illustration. \\n\\nAs these experiments are time-consuming (especially experiments with DuMBO), we will update the results in another updated PDF once they are completed. \\n\\n---\\n\\nWe greatly appreciate the reviewer's insightful comments and the time the reviewer dedicated to providing detailed feedback. We trust that our revisions and responses clearly convey the intent of our work and address the concerns. If any issues or questions remain, please feel free to bring them up. We would be happy to provide any additional clarifications.\"}", "{\"summary\": \"The paper addresses safe Bayesian optimization (BO) to optimize the parameters of cascaded control systems. To make safe BO more suitable for this task, additive kernels are used as a model, and a new definition of expander sets is introduced, which makes the BO optimization more compute efficient. The method is benchmarked on synthetic functions and a hardware setup, tuning parameters for a field-oriented control algorithm in a permanent magnet synchronous motor. 
Although the proposed method outperforms other benchmarks, all tested safe BO methods result in safety constraint violations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(S1) Clear motivation and objective\\n\\n(S2) Demonstration and benchmarking of different safe BO algorithms on synthetic function and in a real-world use case. The application of multiple algorithms as baselines makes the results significant for the BO community. \\n\\n(S3) The provided implementation gives clear insights into the method and the experiments. \\n\\nIn general, the contribution is reasonable. Focusing on the border of the safe set to reduce computational effort is a logical approach. Additive kernels seem to be able to enhance the performance of safe BO algorithms for high-dimensional systems. However, it remains unclear how to choose kernel hyperparameters in practice.\", \"weaknesses\": \"I see the following main weaknesses:\\n\\n(W1) The embedding of related work is insufficient. There are lines of work, which seem closely related, but are not cited appropriately. \\n\\n(W2) Theoretical results are partially imprecise. Some of the results have been known before and build on previous results, yet these are not cited. \\n\\n(W3) In the empirical study, information is missing on how to apply the algorithm and how hyperparameters are chosen. The choice of heuristics is not sufficiently documented and discussed.\\n\\n(W4) Safety violations in hardware experiments are not adequately discussed.\\n\\nBelow, I provide a more detailed discussion of these weaknesses, ordered by sections.\\n\\n\\n## Related work:\\n\\n* In related work, Bottero et al. 2022 is missing. It would make sense to compare against this, as this work explicitly focuses on safe BO without expander sets. \\n* Furthermore, studies on safe/constrained BO for cascaded control systems, such as Khosravi et al. 2022, seem directly related but are not mentioned. 
\\n\\n\\n## Theoretical Results: \\n\\n* Lemma 4.1 is a known result that can be found in, e.g., Berlinet and Thomas-Agnan 2004, Theorem 5\\n* Theorem 4.2 is imprecise; further information on this can be found in Fiedler 2023 (Section 4) \\n* The theoretical results section clearly builds on Chowdhury and Gopalan 2017, Theorem 2, using their results for confidence bounds without referencing this paper. \\n* There are more recent results on confidence bounds in GP regression, which lead to more conservative bounds and do not rely on the information gain. Information on this can be found in Whitehouse et al. 2024\\n* Theorem 5.1: The RKHS norm of the safety functions is bounded by $B$, not the safety functions themselves\\n\\n## Empirical Study \\n\\nFor the stage-wise approach, it is unclear when to go to the next stage. This can only be chosen through a heuristic. In the paper, it remains unclear how to choose this in a practical application. \\n\\nIn the synthetic experiments, only a noise-free setting is evaluated. The results would be more relevant and would underline the contribution of the method more if function evaluations were noisy. In addition, this setting would also better represent real-world settings. \\n\\nIn the experimental evaluation, the choice of hyperparameters of the GP and the choice of $\\\\beta$ is unclear. While in the theoretical derivations the RKHS-norm bound and the information gain are used, in the experiments, $\\\\beta = 2$ is used as a heuristic. This choice is common in other safe BO works; however, it is not made transparent in the paper and can only be found in the implementation. The use of this heuristic invalidates all proven safety guarantees. A detailed discussion on this can be found in Fiedler et al. 2024.\\n\\nThe choice of long lengthscales in the kernel leads to safety violations. Typically, shorter lengthscales compared to the domain size are applied in safe BO applications. 
This limits performance while exploring but can lead to fewer safety violations. \\nIt would be interesting to know when safety violations occurred in the hardware experiments. I assume this is mostly in the first exploration stage.\\n\\nFor the hardware experiments, it is unclear what $T_0$ is and if there is even a switch in the exploitation stage. \\n\\n## Discussion\\n\\nThe safety violations that occur in the empirical results need to be more thoroughly discussed. In light of these, the claim in the conclusion that it \\\"can be seamlessly integrated into real-world complex control applications.\\\" is too confident, having observed 39 safety violations in the hardware experiments. \\n\\n\\n## Minor comments:\", \"line_191_and_194\": \"Definitions of safe set and expander sets in Sui 2015 and Sui 2018 are different\", \"line_206_207\": \"This statement is wrong. Berkenkamp 2016 uses a Matern kernel; generally, in many SafeBO applications Matern kernels with $\\\\nu = 3/2$ are used.\", \"line_333\": \"Typo: Lipschitz continuous\", \"line_317_318\": [\"Grammar in Theorem 5.1 and Theorem 5.2 \\\"with R-sub-Gaussian\\\"\", \"## References (not in the paper)\", \"Berlinet, A., & Thomas-Agnan, C. (2004). _Reproducing Kernel Hilbert Spaces in Probability and Statistics_. Springer Science & Business Media.\", \"Bottero, A., et al. (2022). Information-theoretic safe exploration with Gaussian processes. _Advances in Neural Information Processing Systems_, 35, 30707-30719.\", \"Chowdhury, S. R., & Gopalan, A. (2017). On kernelized multi-armed bandits. In _International Conference on Machine Learning_ (PMLR).\", \"Fiedler, C. (2023). Lipschitz and H\\u00f6lder continuity in reproducing kernel Hilbert spaces. _arXiv preprint arXiv:2310.18078_.\", \"Fiedler, C., Menn, J., Kreisk\\u00f6ther, L., & Trimpe, S. (2024). On safety in safe Bayesian optimization. _Transactions on Machine Learning Research_.\", \"Khosravi, M., et al. (2022). 
Safety-aware cascade controller tuning using constrained Bayesian optimization. _IEEE Transactions on Industrial Electronics_, 70(2), 2128-2138.\", \"Whitehouse, J., Ramdas, A., & Wu, S. Z. (2024). On the sublinear regret of GP-UCB. _Advances in Neural Information Processing Systems_, 36.\"], \"questions\": \"Please address the mentioned weaknesses.\", \"additional_questions_on_the_hardware_experiments\": [\"When does the exploration phase stop, and when does the exploitation phase start?\", \"How were the GP hyperparameters chosen?\", \"Why and when do safety violations occur?\", \"How can $T_0$ be chosen in real-world applications?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates a SafeControlBO method to optimize multiple controllers at the same time. It shows improvements addressing the challenges associated with tuning complex control systems that involve multiple parameters while maintaining online safety.\", \"strengths_of_the_paper\": \"effective optimization while ensuring safety. Its empirical validation demonstrates superior performance compared to some existing safe Bayesian optimization methods. The use of additive kernels enhances efficiency, making it suitable for complex control systems.\", \"weaknesses\": \"incremental theoretical improvements, hyperparameter tuning issues.\\nThe AC likes the paper in general, but the reviewers' opinions make sense; this paper should be improved to reach the bar of ICLR.\", \"additional_comments_on_reviewer_discussion\": \"After evaluating the quite disparate scores given by the reviewers, the AC realized that the opinions of the reviewers are not as different as the scores suggest. 
It is an interesting paper, but it still needs substantial improvement before reaching the bar of ICLR 2025.\"}", "{\"title\": \"Reply to authors' response\", \"comment\": \"### Comment 1:\\n\\nI appreciate that the authors included related work by Bottero et al., Khosravi et al., and Chowdhury et al. \\n\\nHowever, the inclusion of related work and the discussion of existing results are still insufficient. \\n\\nFor example, the addition to the pdf in lines 063-064 is not really fitting. Fiedler 2023 only concerns the relation of RKHS and H\\u00f6lder continuity, and Whitehouse et al. provide insights on regret bounds in GP-UCB. In principle, this could be interpreted as \\\"theoretical analysis\\\", but these papers are not directly related to SafeOpt and should not be referenced in this part of the related work section. \\n\\nFurthermore, this becomes obvious in the comments: \\n\\n\\\"in practical applications, using $\\\\beta = 2$ does not compromise the safety guarantee\\\"; the contrary is shown in Fiedler et al. 2024.\\n\\nIn Comment 6, the authors claim that they use the same noise-free setting as Kirschner et al. (2019). However, in this paper, the synthetic experimental setting includes noise:\\n\\\"On all experiments we add Gaussian noise with standard deviation 0.2 to obtain a similar signal-noise ratio as on our real-world application.\\\" \\n\\n### Comment 2: \\n\\n1. This is still not a contribution.\\n2. In this theorem, only pointwise Lipschitz continuity and not continuity of the whole map is shown. In Line 912, only Lipschitz continuity in the first argument is shown; while this implies in some sense Lipschitz continuity of the kernel, it is not precise; refer to Fiedler 2024, Lemma 4.1 and Prop. 4.4.\\n3. In Whitehouse et al. 
2024, the main point is that tighter confidence bounds are provided that lead to less conservative SafeOpt-type algorithms, which could also be used in practice (with some assumptions) instead of using $\\\\beta = 2$\\n\\n### Comment 3:\\n\\nI appreciate the additional discussion on hyperparameter choices. A critical aspect in SafeBO algorithms is in fact the choice of kernel hyperparameters (Fiedler et al. 2024), as overestimation of lengthscales leads to safety violations. The kernel hyperparameters are therefore critical hyperparameters for safety, which are not trivial to choose in practice. \\nWith T0, an additional hyperparameter is introduced compared to Bottero et al. 2022 and Berkenkamp et al. 2023.\", \"comment_of_the_authors\": \"*Although we employed a more complex*\\n*approach to construct in Theorems 5.1 and 5.2 for comparison with Sui et al. (2018) in practical applications using $\\\\beta = 2$ does not compromise the safety guarantee.* \\n\\nThe choice of $\\\\beta = 2$ is a heuristic that invalidates all safety guarantees; while it can work in practice, as shown in previous works, it encourages cautious exploration rather than providing any guarantee. Even in a well-specified setting with a correct kernel choice, $\\\\beta = 2$ can lead to many safety violations; see Fiedler et al. 2024. \\n\\nAs other works also use this setting in their experiments, this choice can be made, but it is still a critical design choice, which should be transparent in the paper and not only be found in the code.\\n\\n### Comment 4: \\n\\nWe appreciate the discussion, and this is in fact an interesting insight. However, I agree with Reviewer hSAs that the system might not be a good example for safe Bayesian optimization methods. \\n\\n### Comment 5: \\n\\nThe added discussion in the conclusion illustrates the issue nicely. \\n\\n### Comment 6:\\n\\nSee Comment 1; the setting in Kirschner et al. 2019 is not noise-free. 
\\n\\nWhile the changes in the paper are appreciated, I will not change my score.\"}", "{\"summary\": \"The authors propose a novel algorithm for safe Bayesian optimization that exploits purported properties of the squared-exponential kernel to facilitate the expansion of the safe set. The proposed idea is interesting and the authors report good theoretical results. However, the theoretical exposition is poor and the corresponding proofs are either incomplete or incorrect. Though I recommend rejection for this reason, I am open to changing my score if the authors improve the paper accordingly.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-motivated and written in a very clear form. It also provides an adequate review of related works. The proposed algorithm is an interesting contribution and shows good empirical results.\", \"weaknesses\": \"The theoretical results are mostly stated using plain text instead of mathematical formulas, which makes them somewhat ambiguous and hard to understand at times.\\n\\nThe proofs of the theoretical results are incomplete and potentially wrong.\", \"questions\": \"The statement \\\"Srinivas et al. (2010) demonstrated that BO methods can converge to the global optimum of unknown performance functions in fewer steps compared to genetic algorithm.\\\" is incorrect. Nothing in the paper compares their approach to a genetic algorithm, and there is no discussion on this.\\n\\nSimilarly, the statement \\\"However, SAFEOPT uses Gaussian kernels as the covariance function of the Gaussian processes, which is effective mainly for low-dimensional problems (Bengio et al., 2005)\\\" is inaccurate. Berkenkamp et al. (2016) use a Matern kernel and there is nothing in Sui et al. (2015) that indicates that a squared-exponential kernel is necessary.\\n\\nIs equation (1) correct? Shouldn't the last term have t in the subscript instead of k? 
The same holds for equation 2.\\n\\nI feel that theorems 4.1 and 4.2 are mostly trivial and should be placed in the appendix.\\n\\nDoes line 5 in the pseudocode simply mean that the algorithm picks the point with maximal variance in B_n? If so, why not just write $\\\\sigma$ instead of $u_n - l_n$?\\n\\nThe presentation of outermost evaluated safe points is somewhat confusing. Is a_{oes} in the dataset? Would it help to introduce the data set \\\\mathcal{D}_n and to write a_{oes} \\\\in \\\\mathcal{D}_n?\\n\\nTheorems 5.1 and 5.2 are unclear to me. What do the authors mean by the \\\"maximum allowable uncertainty for the exploration to converge to an ( \\\\epsilon )-reachable safe region.\\\" and \\\"Maximum allowable uncertainty for the performance function to converge to a ( \\\\zeta )-optimal function value.\\\"?\\n\\nIn the proof of Lemma C.1, the step from line 878 to 880 is incomplete. To show that the posterior variance is indeed increasing, the authors also need to show that [K_t^{-1} k_t(x)]_i is positive, which the authors do not do.\\n\\nI might have missed something, but I am not sure if the statement in line 894 holds. Take, for example, the points x_sb = 1, x_oes = 0 and x_i = 10. For \\\\lambda = 0, we have || x(\\\\lambda) - x_i ||_2 = 10 > || x_sb - x_oes ||_2 = 1, yet the inner product (x(\\\\lambda) - x_i) (x_sb - x_oes) = (0 - 10)(1 - 0) < 0 is negative.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the time and effort dedicated to evaluating our manuscript. Your perceptive comments and valuable suggestions have greatly improved the quality and clarity of our paper. Below, we address the concerns raised and respond to the reviewer\\u2019s questions. 
The corresponding modifications in the paper have been highlighted in brown.\\n\\n---\\n\\n### **Comment 1**\\n\\n> \\u201cThe main motivation for introducing the new kernel function is its superior in high dimensions. However, all evaluations in the paper are still pretty low-dimensional, with dimensions up to 10.\\u201d \\n> \\u201cCan you include an experiment with a truly high-dimensional system?\\u201d\\n\\n**Response:** \\nAs outlined in lines 083-084 of the paper, the control problems studied here lie within the low-to-moderate dimensional range, as opposed to the high-dimensional settings considered in [1] or [2]. The experimental results demonstrate that, at the problem scale studied in this work, traditional safe Bayesian optimization algorithms are prone to being trapped in local optima, whereas high-dimensional safe Bayesian optimization methods, such as LineBO, require more iterations to achieve competitive performance compared to the proposed method.\\n\\nThe limitations of our method for \\\"truly high-dimensional problems\\\" are also discussed in this paper. As described in lines 536-539 of the paper (currently in lines 508-516 in the updated PDF), the number of orders of additive Gaussian kernels equals the dimension of the problem. For instance, a 100-dimensional problem would involve 100 orders of additive kernels. Designing and combining these kernel components effectively requires extensive domain knowledge and substantial experimental effort. Furthermore, the computational cost associated with these combinations becomes a significant barrier to scaling this approach to very high-dimensional problems. Similar insights are drawn in Sections 3.4 and 4 of [3].\\n\\n---\\n\\n### **Comment 2**\\n\\n> \\u201cIn fact, in the related work it is mentioned that existing safe BO approaches work with systems where \\\"three controller [...] each controller having only two parameters\\\". 
However, the proposed algorithm is experimentally evaluated on the exact same numbers of parameters.\\u201d\\n\\n**Response:** \\nThe description of the related work [4] in this question is in lines 066-067 of the paper. In the related work, the quadrotor is equipped with three motion controllers, each responsible for controlling movement along one of the \\(x\\), \\(y\\), and \\(z\\) axes. Each axis has two control parameters: proportional gain and derivative gain. Importantly, the performance functions associated with the three axes are independent, meaning adjustments to the \\(x\\)-axis controller do not influence the performance of the \\(y\\) or \\(z\\) axes.\\n\\nIn short:\\n- Only one performance function, influenced by 6 parameters, so **6 parameters are tuned simultaneously**.\\n\\n---\\n\\n### **References**\\n\\n1. Johannes Kirschner, Mojmir Mutny, Nicole Hiller, Rasmus Ischebeck, and Andreas Krause. Adaptive and safe Bayesian optimization in high dimensions via one-dimensional subspaces. In *Proc. of the 36th International Conference on Machine Learning (ICML)*, pp. 3429\\u20133438, 2019.\\n2. Anthony Bardou, Patrick Thiran, and Thomas Begin. Relaxing the additivity constraints in decentralized no-regret high-dimensional Bayesian optimization. In *ICLR \\u201924: International Conference on Learning Representations (ICLR)*, 2024.\\n3. David K Duvenaud, Hannes Nickisch, and Carl Rasmussen. Additive Gaussian processes. In *Advances in Neural Information Processing Systems*, volume 24, 2011.\\n4. Felix Berkenkamp, Angela P. Schoellig, and Andreas Krause. Safe controller optimization for quadrotors with Gaussian processes. In *Proc. of 2016 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 
491\\u2013496, Stockholm, Sweden, 2016.\"}", "{\"title\": \"Further explanation with additional experimental results\", \"comment\": \"We are very grateful for the reviewer\\u2019s response.\\n\\n---\\n### **Noisy Benchmark Environments**\\n\\nFirst, we would like to address the issue of the noisy environment. Although the experiment is not yet fully completed, we have included the existing results in the updated PDF; please refer to Appendix F. Since the optimization results of DuMBO and SafeCtrlBO are already very close to the maximum value of the function (under noise-free conditions), even Gaussian noise with a standard deviation of 0.0001 leads to negative regret in the Camelback function. In the Hartmann and Gaussian environments, the experimental results are very similar to those in the noise-free cases.\\n\\nRegarding the noisy setting in Kirschner et al. (2019), although the paper claims that Gaussian noise with a standard deviation of 0.2 is added, the official implementation uses the parameter setting `noise_obs_mode: none`. Under this setting, the experimental results we obtained are consistent with those reported by Kirschner et al. (2019). However, when the standard deviation is set to 0.2 (indeed a large value), our reproduction results indicate that simple regret becomes negative, which differs from the results in Kirschner et al. (2019). In addition, the results of SwarmSafeOpt reported by Kirschner et al. (2019) are significantly lower than our reproduced results (in our reproduction, SwarmSafeOpt is comparable with LineBO). This suggests that Kirschner et al. (2019) may have used undeclared parameter choices.\\n\\n---\\n### **Motor Cascade Control Example**\\nRegarding the motor used in the experiment, we had further discussions with reviewer hSAs, who subsequently raised the score. Reviewer hSAs initially thought that the motor would not encounter unsafe situations, but we explained that such cases did occur during the experiment. 
Currently, reviewer hSAs suggests that the FOC method for the motor does not require iterative adjustment. We provided reasonable explanations for this point as well, which are reflected in our latest conversation.\\n\\n---\\n### **Hyperparameters**\\n\\nWe are very grateful for the reviewer\\u2019s explanation regarding the choice of hyperparameters. However, keeping the hyperparameters consistent with the baseline methods (Kirschner et al. (2019) and Bardou et al. (2024)) ensures a fair comparison. For the additional hyperparameter $ T_0 $, we have also explained the rationale behind its selection. We have added these contents to the updated PDF, please refer to Appendix B.\"}", "{\"comment\": \"We thank the reviewer for the detailed review and are very grateful for the kind appreciation of the paper. We address the weaknesses raised by the reviewer and answer the reviewer's questions below. The corresponding modifications in the paper have been highlighted in cyan.\\n\\n---\\n\\n### **Comment 1**\\n\\n> \\u201cThe experimental results on the physical system resulted in a significant number of constraint violations (39 for the method proposed by the authors). What are the consequences of violations of constraints in this test application? Do they make the algorithm unsafe for use in practice?\\u201d\\n\\n**Response:** \\nIn the hardware experiment of our method, there are in total 39 constraint violations in 5 runs. From the implementation details provided in the \\\"SpeedGoat\\\\_RT\\\\_AdditiveBO\\\\_exp.ipynb\\\" file, we observe that most constraint violations pertain to the first constraint function, where the performance value is lower than the preset threshold. The first constraint is a soft constraint, and a violation indicates that the performance is below expectations, such as exhibiting a large overshoot or a slower transient response. Importantly, this violation does not imply that the system is unsafe. 
The constraint could be set to $ -\\\\infty $ to ensure no violations; however, this may result in reduced optimization efficiency. \\n\\nA smaller number of violations involve the second constraint function, which represents the steady-state error being lower than the preset threshold. Such a violation implies that the steady-state value of the motor speed deviates significantly from the desired setpoint, e.g., requiring the motor to stabilize at $ 100 \\\\text{rad/s} $, but it instead stabilizes at $ 90 \\\\text{rad/s} $. Similar to the first constraint, this violation does not indicate system unsafety. Instead, it is introduced to enhance the optimization efficiency of the algorithm by discouraging the selection of parameters that lead to high steady-state errors. \\n\\nThe third constraint ensures that the value of the function representing signal safety remains above the preset safe threshold throughout the optimization process, thereby guaranteeing the safety of the system during optimization.\\n\\n---\\n\\n### **Comment 2**\\n\\n> \\u201cIs there a parameter that can be changed to reduce the number of violations? Does that result in a trade-off between performance and number of constraint violations?\\u201d\\n\\n**Response:** \\nYes, there are parameters that can be adjusted. The choice of kernel hyperparameters and the weight factors in equations (7) and (8) will influence the number of constraint violations. By adjusting the weight factors in equations (7) and (8), we can control the smoothness of the performance function and constraint functions. Similarly, the variance and lengthscale of the base kernels allow us to adjust the smoothness of the function generated by the Gaussian Process. 
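As a generic illustration of the smoothness control described here (a minimal sketch with made-up values, not the experiment's actual kernel configuration), a squared-exponential base kernel behaves as follows:

```python
import numpy as np

def se_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential (Gaussian) base kernel k(x, x')."""
    return variance * np.exp(-0.5 * (x1 - x2) ** 2 / lengthscale ** 2)

# A larger lengthscale makes distant inputs more strongly correlated,
# so functions drawn from the GP prior vary more slowly (are smoother);
# the variance scales the overall amplitude of those functions.
k_rough = se_kernel(0.0, 1.0, lengthscale=0.2)   # weak correlation
k_smooth = se_kernel(0.0, 1.0, lengthscale=2.0)  # strong correlation
```

Here `se_kernel` and the lengthscale values are purely illustrative; the point is only that the lengthscale (together with the variance) controls how smooth the sampled functions are.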
Ensuring appropriate smoothness for the performance function and constraint functions can satisfy the conditions of Theorems 5.1 and 5.2, leading to a high probability (not 100%, as similar analyses are made in [1], and constraint violations are reported in [2]) that the optimization process avoids violating the constraints. \\n\\nHowever, conservative settings will inevitably reduce optimization efficiency, representing the trade-off mentioned by the reviewer. As noted in the implementation details (\\\"SafeCtrlBO\\\\_Hartmann6D.ipynb\\\"), using only the first-order additive kernel leads to more aggressive iterations, achieving the highest optimization efficiency but potentially violating constraints. Conversely, using only the highest-order additive kernel reduces optimization efficiency but ensures that the constraints are not violated with high probability. It is also possible to optimize kernel hyperparameters to ensure that all kernels avoid constraint violations with high probability. However, due to the trade-off, this adjustment also results in reduced optimization efficiency.\\n\\nThe synthetic function benchmark results further corroborate this: the DuMBO algorithm exhibits the highest optimization efficiency among compared methods, but it violates constraints frequently, aligning with our conjecture.\\n\\n---\\n\\n### **References**\\n\\n1. Yanan Sui, Vincent Zhuang, Joel Burdick, and Yisong Yue. Stagewise safe Bayesian optimization with Gaussian processes. In *Proc. of the 35th International Conference on Machine Learning (ICML)*, pp. 4781\\u20134789, Stockholm, Sweden, 2018. \\n2. Mohammad Khosravi, Christopher Konig, Markus Maier, Roy S. Smith, John Lygeros, and Alisa Rupenyan. Safety-aware cascade controller tuning using constrained Bayesian optimization. 
*IEEE Transactions on Industrial Electronics*, 70(2):2128\\u20132138, 2023.\"}", "{\"title\": \"Summary of Official Comments by Authors\", \"comment\": \"We sincerely thank each reviewer for their valuable comments. We have revised the paper based on all the feedback provided.\\n\\nFirst, we thank the reviewers for recognizing the ideas and experimental results presented in the paper. Reviewer R4HB expressed particular interest in the theoretical aspects. We provided detailed responses and revised the paper accordingly, especially updating Theorem D.3, which aligns well with the results of the ablation study in Appendix E. We believe this revision significantly enhances the quality of the paper.\\n\\nReviewer QXk1 expressed concerns about the selection of hyperparameters in the experiments. We addressed this by combining insights from relevant literature and the specific experimental setup, and we incorporated these explanations into the updated paper. Additionally, we included benchmark experiments in noisy environments as requested, further improving the experimental results.\\n\\nReviewers zLs8 and QXk1 expressed concerns about constraint violations in the hardware experiments. We provided detailed explanations and included these clarifications in the paper. Reviewer hSAs raised concerns regarding the dimensionality of the research problem and offered suggestions on the experimental equipment. We addressed these points in detail and incorporated the necessary revisions, and we are pleased to note that we have successfully clarified reviewer hSAs's questions.\\n\\nWe believe that these revisions have greatly improved the paper's readability and made the contributions of this work much clearer.\"}", "{\"comment\": \"### **Comment 2**\\n\\n> \\u201c(W2) Theoretical results are partially imprecise. 
Some of the results have been known before and build on previous results, yet these are not cited.\\u201d\\n\\n**Response:** \\nWe will discuss the comments in this section one by one.\\n\\n1. **\\u201cLemma 4.1 is a known result that can be found in e.g., Berlinet and Thomas-Agnan 2004 Theorem 5.\\u201d** \\n We carefully compare our proof with Theorem 5 in Berlinet and Thomas-Agnan (2004) (referred to as \\u201cTheorem 5\\u201d below). On the surface, our proof is a special case of Theorem 5, where the kernels are specific (Gaussian) and there are $ d $ kernels instead of two. From a deeper analysis of the proof process:\\n - We explicitly and mathematically demonstrate that $ K $ is positive definite by showing that the sum of positive definite kernels remains positive definite. Theorem 5 relies on the property that the sum of reproducing kernels is itself positive definite.\\n - Regarding the construction of the inner product and norm in $ \\\\mathcal{H} $, in our case, since the functions $ f_i $ depend on different variables $ x_i $, the RKHSs $ \\\\mathcal{H}_i $ are orthogonal, and the minimal norm simplifies to the sum of norms. Theorem 5 accommodates cases where $ \\\\mathcal{H}_1 $ and $ \\\\mathcal{H}_2 $ may not be orthogonal, hence the need for the minimal norm.\\n - In the proof of the reproducing property, our proof leverages the explicit form of the Gaussian kernels and their reproducing properties, while Theorem 5 demonstrates that the reproducing property holds using the mappings $ v $ and $ v^{-1} $, and properties of $ \\\\mathcal{N} $ and $ \\\\mathcal{F} $.\\n \\n In conclusion, our proof is specific to Gaussian kernels and orthogonal RKHSs, while Theorem 5 is general. However, our proof is more constructive and computational, while Theorem 5 uses abstract functional analysis tools. Although the conclusions are similar, the proofs are very different.\\n\\n2. 
**\\u201cTheorem 4.2 is imprecise; further information on this can be found in Fiedler 2023 (Section 4).\\u201d** \\n We have carefully reviewed Section 4 in Fiedler (2023), but did not find any conclusions or proofs related to the Lipschitz-continuity of additive Gaussian kernels. We would greatly appreciate it if the reviewer could point out any inaccuracies in the proof of Theorem 4.2 of our paper, similar to the comments provided by reviewer R4HB.\\n\\n3. **\\u201cThe theoretical results section clearly builds on Chowdhury and Gopalan 2017, Theorem 2.\\u201d** \\n The reviewer refers to the premises of Theorems 5.1 and 5.2 from the paper by Chowdhury and Gopalan (2017). In fact, these premises are derived from Theorems 1 and 2 in Sui et al. (2018), which we have included in our citations. Theorems 1 and 2 in Sui et al. (2018) themselves reference the proof of Theorem 2 in Chowdhury and Gopalan (2017). We acknowledge that we only cited Sui et al. (2018) and not Chowdhury and Gopalan (2017). We thank the reviewer for pointing this out and helping us improve the citation of the paper.\\n\\n4. **\\u201cThere are more recent results [\\u2026] can be found in Whitehouse et al. 2024.\\u201d** \\n We use the proof based on maximum information gain to facilitate a more intuitive comparison with previous safe Bayesian optimization algorithms, such as Sui et al. (2018). Nevertheless, we thank the reviewers for recommending new theoretical articles, which offer potential directions for advancing future research.\\n\\n5. **\\u201cTheorem 5.1: The RKHS norm of the safety functions is bounded by $ B $, not the safety functions themselves.\\u201d** \\n We thank the reviewer for pointing out this typo. 
We have revised the description of Theorems 5.1 and 5.2 accordingly, combining the suggestions from reviewer R4HB.\"}", "{\"title\": \"Constraint violations explained\", \"comment\": \"Thank you for explaining the nature of the constraint violations resulting from the application of the method. It appears that they are not safety related, but only failures to reach desired performance targets. This explanation is likely to increase the appeal of the method to practitioners who otherwise might be scared away by the phrase \\\"constraint violation\\\". I maintain my rating for the paper and recommend acceptance.\"}", "{\"comment\": \"Thank you very much for your response, and we sincerely appreciate your comments on improving the comprehensibility of the experimental section of this paper.\"}", "{\"comment\": \"We thank the reviewer for the detailed review and affirmation of the paper's contribution, as well as the help in improving the paper's references. We address the weaknesses raised by the reviewer and answer the reviewer's questions below. The corresponding modifications in the paper have been highlighted in purple.\\n\\n---\\n\\n### **Comment 1**\\n\\n> \\u201c(W1) The embedding of related work is insufficient. There are lines of work, which seem closely related, but are not cited appropriately.\\u201d\\n\\n**Response:** \\nWe thank the reviewer for providing the articles, which we will discuss one by one and add to the updated PDF.\\n\\n1. **The Information-Theoretic Safe Exploration (ISE) algorithm by Bottero et al. (2022):** \\n ISE replaces uncertainty-driven exploration with a more principled information-theoretic approach. By directly maximizing information gain, ISE achieves better data efficiency compared to SafeOpt, especially in settings with heteroskedastic noise where uncertainty measures alone are insufficient. 
ISE is designed for continuous domains and does not require discretization of the parameter space, which enhances its high-dimensional optimization capability compared to SafeOpt. However, in recent work from NeurIPS 2024, H\\u00fcbotter et al. (2024) reported that ISE \\\"leads to significantly worse performance on the simplest of problems\\\".\\n We are currently reimplementing the algorithm and will try to test it on our real experiment.\\n\\n2. **The article by Khosravi et al. (2023):** \\n This work aligns with the SafeOpt framework at the algorithmic level and serves as an application of SafeOpt in the industrial domain. However, it introduces several experimental innovations:\\n - **Barrier-like penalty term:** The cost function incorporates a safety metric that penalizes gains approaching unsafe thresholds by utilizing experimentally determined unsafe controller parameters.\\n - **Robust stopping criterion:** They propose a stopping criterion based on the constrained expected improvement (CEI). The algorithm terminates when the ratio between the current and maximum CEI across iterations remains below a predefined threshold for three consecutive iterations.\\n - **Critical gain detection:** Driven by experiments, this step employs Fourier analysis to preemptively identify unstable parameter ranges, ensuring system stability.\\n\\n---\\n\\n### **References**\\n\\n- Jonas H\\u00fcbotter, Bhavya Sukhija, Lenart Treven, Yarden As, and Andreas Krause. Transductive Active Learning: Theory and Applications. 
https://openreview.net/forum?id=tZtepJBtHg\"}", "{\"comment\": \"We are very grateful for the reviewer's response!\\n\\nFor most drive motors, as long as parameters such as stator resistance, d-q axis inductance $ L_d $, $ L_q $, rotor inertia, number of pole pairs, and others are known, the control parameters of the three PI controllers in FOC can be calculated using established formulas (derived from the approximate linear model). However, due to the limitations of linear approximations, the control parameters obtained in this manner typically result in nearly satisfactory motor performance but are not \\\"optimal\\\". In this experiment, we do not have precise values for these motor parameters, but our goal is to determine control parameters that optimize the current task's performance. Thus, the parameter tuning process is iterative.\\n\\nHowever, we sincerely appreciate the future research direction suggested by the reviewer. In future work, we plan to explore more complex and widely used robotic systems, such as gantry systems, and to conduct simulation experiments on large power systems.\"}", "{\"comment\": \"### **Comment 5**\\n\\n> \\u201cDoes line 5 in the pseudocode simply mean that the algorithm picks the point with maximal variance in $ B_n $? If so, why not just write $ \\\\sigma $ instead of $ u_n - l_n $? \\u201d\\n\\n**Response:** \\nYes, line 5 of the pseudocode represents the acquisition function used during the safe exploration stage, which selects the point in $ B_n $ where the performance function exhibits the greatest uncertainty. 
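To make this selection rule concrete, here is a minimal NumPy sketch of picking the maximum-uncertainty point on the boundary set; all names and values are illustrative assumptions on our part, not the paper's implementation.

```python
import numpy as np

def max_uncertainty_point(boundary_points, mu, sigma, beta=2.0):
    """Return the candidate in B_n with the widest confidence interval.

    Maximising u_n - l_n = 2 * beta * sigma is equivalent to maximising
    sigma itself; the UCB-minus-LCB form mirrors the exploitation-stage
    acquisition. All names here are illustrative, not the authors' code.
    """
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    widths = (mu + beta * sigma) - (mu - beta * sigma)  # = 2 * beta * sigma
    return boundary_points[int(np.argmax(widths))]

# Toy example: the middle candidate has the largest posterior uncertainty,
# so it is the one selected for the next safe-exploration evaluation.
candidates = np.array([0.0, 1.0, 2.0])
chosen = max_uncertainty_point(candidates, mu=[0.5, 0.4, 0.6],
                               sigma=[0.10, 0.50, 0.20])
```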
While this could be expressed directly in terms of $ \\\\sigma $, we chose to write it in the form of the upper confidence bound minus the lower confidence bound to maintain consistency with line 11 and to emphasize the distinction between the acquisition functions used in the exploration and exploitation phases.\\n\\n---\\n\\n### **Comment 6**\\n\\n> \\u201cIs $ a_{\\\\text{oes}} $ in the dataset? Would it help to introduce the dataset $ \\\\mathcal{D}_n $ and to write $ a_{\\\\text{oes}} \\\\in \\\\mathcal{D}_n $? \\u201d\\n\\n**Response:** \\nIf the \\\"dataset\\\" referred to by the reviewer corresponds to the set of observations $ D_n $, then $ a_{\\\\text{oes}} \\\\in D_n $. According to line 257, $ a_{\\\\text{oes}} \\\\in S_n^{\\\\text{eval}} $, where $ S_n^{\\\\text{eval}} \\\\subseteq S_n $ and $ S_n^{\\\\text{eval}} $ contains all the evaluated points in $ S_n $. If all points in $ D_n $ are safe observations, then $ D_n = S_n^{\\\\text{eval}} \\\\subseteq S_n $. \\n$ a_{\\\\text{oes}} $ represents the point(s) in $ S_n^{\\\\text{eval}} $ that is closest to a specific $ a_{\\\\text{sb}} $. For example, in a 1D case, suppose the current $ S_n^{\\\\text{eval}} = \\\\{-1, 2, 3\\\\} $ and $ a_{\\\\text{sb}} = -3 $ and $ 5 $. Then, $ a_{\\\\text{oes}} = -1 $ and $ 3 $, and the outermost region is $ [-3, -1] \\\\cup [3, 5] $.\\n\\n---\\n\\n### **Comment 7**\\n\\n> \\u201cWhat do the authors mean by the \\\"maximum allowable uncertainty for the exploration to converge to an $ \\\\epsilon $-reachable safe region.\\\" and \\\"Maximum allowable uncertainty for the performance function to converge to a $ \\\\zeta $-optimal function value.\\\"?\\u201d\\n\\n**Response:** \\nThe maximum allowable uncertainty, $ \\\\epsilon $, for the exploration to converge to an $ \\\\epsilon $-reachable safe region is a very small positive number used to define the stopping criterion for the exploration stage. 
During the early stages of the safe exploration, the safe set $ S_t $ is small, the outermost region $ O_t $ is large, and the uncertainty $ \\\\sigma_t $ in the performance function and safety function values $ J $ and $ G_i $ at most safe boundary points $ a_{\\\\text{sb}} $ in $ B_t $ is high. \\n\\nAs the safe exploration progresses, the safe set $ S_t $ gradually expands, and the uncertainty in $ J $ and $ G_i $ for each $ a_{\\\\text{sb}} $ decreases. By a certain iteration $ t^* $, the uncertainty in $ J $ and $ G_i $ for all $ a_{\\\\text{sb}} $ becomes less than or equal to a predefined small positive number $ \\\\epsilon $. This indicates that the safe region (the region of the performance function and safety function values above the safe threshold) has been sufficiently explored, marking the end of the safe exploration phase. \\n\\nSimilarly, the maximum allowable uncertainty, $ \\\\zeta $, for the performance function to converge to a $ \\\\zeta $-optimal function value is a very small positive number used to define the signal that the exploitation stage has converged to the optimal performance function value. At the beginning of the exploitation stage, the difference between the optimized best function value $ f(a_{\\\\text{opt}}) $ and the theoretical optimal value $ f(a^*) $ is large. \\n\\nAs exploitation progresses, $ f(a_{\\\\text{opt}}) $ gradually approaches $ f(a^*) $, so the simple regret $ r_t = f(a^*) - f(a_{\\\\text{opt}}) $ decreases. After a certain iteration $ t^* $, $ r_t $ becomes less than or equal to a predefined small positive number $ \\\\zeta $, indicating that the difference between $ f(a_{\\\\text{opt}}) $ and $ f(a^*) $ is sufficiently small. 
At this point, it can be considered that the optimal value of the function has been effectively obtained.\\n\\n---\"}", "{\"comment\": \"### **Comment 4**\\n\\n> \\u201c(W4) Safety violations in hardware experiments are not adequately discussed -- Why and when do safety violations occur?\\u201d\\n\\n**Response:** \\nWe thank the reviewer for this insightful comment. Similar questions were raised in the comments of reviewer zLs8, to which we provided detailed responses that can be referenced. In summary, the 39 constraint violations in 5 runs observed in the hardware experiments did not render the system unsafe or unstable. Rather, they represent the exploration of poorly performing controllers, such as those with large overshoot, slow transient response, or unacceptable steady-state error. \\n\\nThese violations primarily occurred during the exploration phase. Their number can be reduced by adjusting the hyperparameters of the base kernels or modifying the performance function to make it smoother. However, such adjustments may require more iterations, which could increase computational costs in practical applications. \\n\\nMoreover, compared to the baseline safe Bayesian optimization algorithms included in the experiments, our method exhibited the lowest number of constraint violations under similar hyperparameter settings. As noted by Sui et al. (2018), safe BO aims to ensure safety with high probability rather than guaranteeing 100% safety. Therefore, all safe BO methods based on confidence intervals inherently have a probability of constraint violation, regardless of how sophisticated the hyperparameter design may be. For example, constraint violations are also reported in Khosravi et al. (2023).\"}", "{\"comment\": \"### **Comment 3**\\n\\n> \\u201c(W3) In the empirical study, information is missing on how to apply the algorithm and how hyperparameters are chosen. 
The choice of heuristics is not sufficiently documented and discussed.\\u201d\\n\\n**Response:** \\nAs demonstrated in the implementation details, similar to SafeOpt, users only need to define the performance function and constraint functions, set the (additive) kernels according to the problem dimension, and then directly call the algorithm for optimization. In contrast, many other safe Bayesian optimization algorithms involve a more intricate setup process before they can be applied to new datasets or complex systems. \\n\\nAs for the selection of various hyperparameters, similar to other Bayesian optimization algorithms and deep learning methods, to the best of our knowledge, it remains an open problem. Although alternative approaches are discussed in Duvenaud et al. (2011) and Fiducioso et al. (2019), their application to practical tasks is often more complex. \\n\\nIn line with Kirschner et al. (2019), we manually selected reasonable values for the hyperparameters of the methods. Similarly, we did not conduct an exhaustive hyperparameter search. We will provide a detailed discussion of the hyperparameters used in this paper and the rationale behind their selection.\\n\\n---\\n\\n1. **Choice of $ T_0 $** \\n $ T_0 $ represents the number of iterations in the safe exploration stage. According to Theorem 5.1, $ T_0 $ is theoretically finite. However, in the practical control optimization tasks discussed in the paper, excessive exploration is unrealistic. For instance, in our hardware experiment, 100 iterations required approximately 5 hours, with most of the time spent on motor performance testing. In this case, we selected $ T_0 = 15 $, and our controller achieved stable and good performance after about 45 iterations. \\n\\n A similar analysis was conducted by Bardou et al. (2024) in their author response. Although their paper discussed asymptotic behaviours, they limited the total number of iterations to the range between 100 and 200.\\n\\n---\\n\\n2. 
**Choice of variance and lengthscales in base kernels, and the choice of $ \\beta $.** \\n The selection of Gaussian Process (GP) hyperparameters in our paper was primarily aimed at aligning with the baseline experiments to enable a fair comparison. We compared six baseline algorithms in the paper, three of which were from Kirschner et al. (2019). To ensure consistency, we used the same hyperparameters where possible, including the choice of $ \\beta = 2 $. Although we employed a more complex approach to construct $ \\beta $ in Theorems 5.1 and 5.2 for comparison with Sui et al. (2018), in practical applications, using $ \\beta = 2 $ does not compromise the safety guarantee. This is because, regardless of the choice of $ \\beta $, we enforce that the lower confidence bound satisfies $ l_n > h $ to ensure high-probability safety. \\n\\n In the hardware experiments, certain field-oriented control domain knowledge informed the selection of the variance and lengthscale of the base kernels. Field-oriented control comprises three loops, with varying impacts on motor response:\\n - The $ P $-gain and $ I $-gain of the speed loop have the greatest impact on motor response.\\n - The $ P $-gain and $ I $-gain of the $ q $-axis current loop have a moderate impact.\\n - The $ P $-gain and $ I $-gain of the $ d $-axis current loop, as the reference current remains at zero, have the smallest impact and primarily affect signal safety. \\n\\n Accordingly, we set the variances of $ k_1 $ and $ k_2 $ to be higher, allowing the GP to model larger variations in the performance function along these dimensions. Conversely, the variances of $ k_5 $ and $ k_6 $ were set lower, reflecting their lesser contribution to the overall model. Since the optimization results were already excellent under this configuration, we did not further fine-tune the lengthscales. 
However, if fine-tuning were necessary, we believe that relatively smaller lengthscales should be selected for $ k_1 $ and $ k_2 $, and larger lengthscales for $ k_5 $ and $ k_6 $. Additionally, functions such as `kernel.variance.set_prior()` or `kernel.lengthscale.set_prior()` could be used for real-time fine-tuning.\"}", "{\"summary\": \"The paper describes a method for safe Bayesian optimization suitable for control applications where safety concerns limit the set of parameters that can be tried. The main contribution is the use of additive kernels in safe BO, resulting in faster optimization. This contribution is of major significance when applying BO to physical systems, where control trials are slow and costly to perform. A second contribution is the speedup of the computation of the expander set, affecting the computational time of the algorithm. The proposed method has been verified on benchmark functions in simulation, as well as on a challenging control problem involving a physical set-up consisting of nested controllers regulating the velocity of a permanent magnet synchronous motor under field-oriented control.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The proposed method for using additive kernels in safety-aware BO leads to optimization in fewer trials, which is of great practical significance for physical control systems with safety limitations. The paper is written clearly, references prior work in the field, and explains very well the connections of the proposed method to that work. Although safe BO and additive functions are not novel ideas individually, their combination is original and based on significant technical insight. Properties of the proposed additive kernels are analyzed well theoretically. 
A novel acquisition function is proposed that is faster to compute than previously proposed ones.\\n\\nFurthermore, the method is tested on a physical control system set-up that is of real practical use. Because the optimized parameters of the controllers are their proportional and integral gains, the proposed method could potentially find widespread use in industrial practice, where the use of PI controllers is very common and their tuning is known to be notoriously tricky and laborious.\", \"weaknesses\": \"The paper is somewhat incremental in the line of research on applying BO to safety-constrained control systems, and combines known ideas. Nevertheless, this combination required a significant technical insight, so it is far from obvious or trivial.\\n\\nThe experimental results on the physical system resulted in a significant number of constraint violations (39 for the method proposed by the authors). It is not clear what the consequences of these violations are in practice.\", \"questions\": \"What are the consequences of violations of constraints in this test application? Do they make the algorithm unsafe for use in practice? Is there a parameter that can be changed to reduce the number of violations? Does that result in a trade-off between performance and number of constraint violations, and if yes, how can this trade-off be handled?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We sincerely thank the reviewer for the very careful review. To reduce confusion, we have rewritten the proof as Lemma D.2 and Theorem D.3. Please refer to the updated PDF. We added a premise that Theorem D.3 holds when $ O_n $ is sufficiently large, corresponding to the early safe exploration stage. 
In the later stage of safe exploration, or when the problem dimension increases, $ O_n $ becomes smaller accordingly, and the result of Theorem D.3 may become inaccurate. This may explain the results of the ablation study in Appendix E, where SafeCtrlBO gradually outperforms StageOpt in 6D and 10D as the number of iterations increases when the same kernel is used.\\n\\nThe key difference between SafeCtrlBO and StageOpt in implementation is that SafeCtrlBO uses the additive Gaussian kernel and employs a new acquisition function. If, as we have proven earlier, the acquisition results of the new acquisition function are equivalent to those in StageOpt, then the results of SafeCtrlBO in the ablation study should be nearly the same as those of StageOpt (because here the additive Gaussian kernel is not used). This was initially a source of confusion for us as well. We thank the reviewer for raising this question, which prompts us to revisit the functionality of our proposed acquisition function. We hope that this clarification highlights the contributions of our paper more effectively.\"}", "{\"comment\": \"### **Comment 3**\\n\\n> \\u201cHow can this trade-off be handled?\\u201d\\n\\n**Response:** \\nHow to manage this trade-off depends on the specific task requirements. It requires a comprehensive optimization of multiple aspects of the task. For instance: \\n- **Iteration tolerance**: How many iterations can the task afford? In systems where each iteration runs quickly and a large number of iterations do not cause significant wear, a relatively conservative strategy can be adopted. This reduces optimization efficiency but enhances safety guarantees. \\n- **Safety requirements**: How stringent are the safety requirements? For example, the control optimization process of a drone system may have high safety requirements, as unsafe control parameters may result in crashes or collisions. 
In such cases, safety must be guaranteed even at the cost of reduced optimization efficiency. Conversely, in automotive motor control optimization, safety requirements are relatively lower. Some poorly performing control parameters may require human intervention to terminate their operation early, but in most cases, they will not cause direct damage. Therefore, as set in the hardware experiment, we can appropriately relax the constraints on tracking performance while ensuring signal safety to improve optimization efficiency. \\n\\n---\\n\\nWe thank the reviewer for raising these questions, which helped us identify a problem in our explanation of the experimental results section. We have revised and improved it accordingly, and the modifications have been highlighted in cyan in the updated PDF. We hope these changes enhance the clarity and comprehensibility of the experimental results. If any issues or questions remain, please feel free to bring them up. We would be happy to provide further clarifications.\"}", "{\"summary\": \"In this paper, the authors propose a safe Bayesian optimization framework utilizing Gaussian processes with additive squared exponential functions. The motivation for the new kernel function is the poor performance of the \\\"standard\\\" SE kernel for high-dimensional spaces. The authors argue that using BO for controller tuning often requires operating in high-dimensional spaces due to a large number of parameters of the controllers to be considered. It is experimentally evaluated that the proposed algorithm, SafeCtrlBO, can lead to better results than other safe BO algorithms. 
Furthermore, the authors present theoretical results on the finite-time convergence of the algorithm.\\n\\nThe main contribution is utilizing additive SE-kernels in a Bayesian optimization framework and proving the finite-time convergence.\", \"update\": \"score raised\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important problem of (safe) BO, which is the limitation to low-dimensional problems.\", \"Using additive kernel functions in a safe BO setting to increase its performance in high dimensional problems seems, to the best of my knowledge, a novel idea. The results indicate that this approach can be superior to \\\"standard\\\" safe BO approaches for some systems.\", \"To the best of my knowledge, the math seems to be sound and the proofs are correct.\"], \"weaknesses\": \"1. The main motivation for introducing the new kernel function is its superiority in high dimensions. However, all evaluations in the paper are still pretty low-dimensional, with dimensions up to 10. In fact, in the related work it is mentioned that existing safe BO approaches work with systems where \\\"three controller [...] each controller having only two parameters\\\". However, the proposed algorithm is experimentally evaluated on the exact same number of parameters.\\n\\n2. I agree that controller tuning including many parameters can be tricky. However, cascaded controllers are a bad example of that. In practice, these structures can be tuned very efficiently as you start with the inner loop until a specific performance is reached, then the next loop, and so on. All parameters are very meaningful, and the process is quite transparent. Furthermore, \\nmotor controllers are a bad example for safe BO. As long as no load is attached to the motor (something you don't do for tuning the controllers), it is almost impossible to destroy the system. 
Typical amplifiers do have a \\\"max current\\\" setting, or it is simply defined in the software. Therefore, I cannot understand the safety concerns that the authors mentioned for the experiment.\", \"long_story_short\": \"Although the adapted safe BO method sounds interesting, the motivation with cascade controllers and the experimental evaluation are not good choices.\", \"questions\": \"1: Can you include an experiment with a truly high-dimensional system?\", \"2\": \"As mentioned in the weakness section, I think cascade controllers are not a good example. I suggest something like distributed controllers in large water/power networks. Could you test the algorithms on more complex systems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for responding to my questions. The updated Lemma (Lemma D.2) is seemingly incorrect. The authors state $\\\\partial \\\\mathbf{k}_t(x) / \\\\partial d_i = -\\\\frac{d_i}{l^2} \\\\mathbf{k}_t(x)$, which is incorrect, unless I misunderstood the definition of $k_t(d_i)$. In fact, $\\\\partial \\\\mathbf{k}_t(x) / \\\\partial d_i = -\\\\frac{d_i}{l^2} \\\\left(0, 0, 0, \\\\ldots, k(x, x_i), \\\\ldots, 0, 0\\\\right)$. Though this potentially does not affect the final result, I suggest the authors carefully introduce the notation employed here and revise the result accordingly to avoid confusion.\"}
After correcting this mistake, we observe that $ \\\\mathbf{K}_t $ is a symmetric positive definite (PD) matrix, and consequently, $ \\\\mathbf{K}_t^{-1} $ is also PD. This allows us to conclude that $ \\\\mathbf{k}_t(x)^\\\\top \\\\mathbf{K}_t^{-1} \\\\mathbf{k}_t(x) > 0 $. \\n\\nFurthermore, since $ -2 \\\\times -\\\\frac{d_i}{l^2} \\\\geq 0 $, we can deduce the result in line 880: \\n$$\\n\\\\frac{\\\\partial \\\\sigma_t^2(x)}{\\\\partial d_i} \\\\geq 0. \\n$$\\n\\n---\\n\\n### **Comment 9**\\n\\n> \\u201cI am not sure if the statement in line 894 holds. Take, for example, the points $ x_{\\\\text{sb}} = 1 $, $ x_{\\\\text{oes}} = 0 $ and $ x_i = 10 $. For $ \\\\lambda=0 $, we have $ || x(\\\\lambda) - x_i ||_2 = 10 > || x_{\\\\text{sb}} - x_{\\\\text{oes}}||_2 = 1 $, yet the inner product $ (x(\\\\lambda) - x_i ) (x_{\\\\text{sb}}-x_{\\\\text{oes}}) = (0-10 ) (1-0) <0 $ is negative.\\u201d\\n\\n**Response:** \\nIn the case provided by the reviewer, where $ x_{\\\\text{sb}} = 1 $ and $ x_{\\\\text{oes}} = 0 $, any evaluated safe point $ x_i $ must lie within $ [-\\\\infty, 0] $. This is because $ x_{\\\\text{oes}} = 0 $ is already the closest evaluated safe point to $ x_{\\\\text{sb}} $. In this scenario, the outermost region is $ [0, 1] $, while $ [1, \\\\infty] $ represents the current unsafe region. Therefore, $ x_i = 10 $ is not possible under these conditions. \\n\\nAs stated in line 894 of the paper, $ x_{\\\\text{sb}} $ is farther from $ x_i $ than $ x_{\\\\text{oes}} $ is. 
The positional relationship of these points in the 1D case can be described as:\\n\\n$\\\\text{unsafe region}$ $\\\\cdots$ $x_{\\\\text{sb,left}}$ $\\\\cdots$ $x(\\\\lambda_{\\\\text{left}})$ $\\\\cdots$ $x_{\\\\text{oes,left}}$ $\\\\cdots$ $x_i$ $\\\\cdots$ $x_{\\\\text{oes,right}}$ $\\\\cdots$ $x(\\\\lambda_{\\\\text{right}})$ $\\\\cdots$ $x_{\\\\text{sb,right}}$ $\\\\cdots$ $\\\\text{unsafe region}$.\\n\\nAny evaluated safe point $ x_i $ must lie between the two outermost evaluated safe points $x_{\\\\text{oes,left}}$ and $x_{\\\\text{oes,right}}$. This ensures that the sign of $ x(\\\\lambda) - x_i $ is identical to that of $ x_{\\\\text{sb}} - x_{\\\\text{oes}} $ on the same side (either left or right, depending on $ x(\\\\lambda) $), guaranteeing that the following inequality holds: \\n\\n$$\\n\\\\left( x(\\\\lambda) - x_i \\\\right)^\\\\top \\\\left( x_{\\\\text{sb}} - x_{\\\\text{oes}} \\\\right) \\\\geq 0.\\n$$\\n\\n---\\n\\nIf any issues or questions remain, please feel free to bring them up. We would be happy to provide further clarifications.\"}" ] }
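The additive SE kernel construction and the posterior-variance monotonicity argument (Lemma D.2) discussed in the exchanges above can be sketched numerically. The following numpy snippet is purely illustrative: the per-dimension variances, lengthscales and data points are invented for the example and are not the configuration used in the paper.

```python
import numpy as np

def additive_se_kernel(X, Y, variances, lengthscales):
    """Additive SE kernel: k(x, y) = sum_i s_i^2 * exp(-(x_i - y_i)^2 / (2 * l_i^2)).

    Each input dimension gets its own 1-D squared-exponential base kernel
    (k_1, ..., k_D), so per-dimension variances can encode domain knowledge,
    e.g. a larger variance for speed-loop gains than for current-loop gains.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, (s2, l) in enumerate(zip(variances, lengthscales)):
        diff = X[:, i:i + 1] - Y[:, i:i + 1].T
        K += s2 * np.exp(-diff ** 2 / (2 * l ** 2))
    return K

def posterior_variance(x_star, X_train, variances, lengthscales, noise=1e-6):
    """GP posterior variance sigma_t^2(x*) given already-evaluated points."""
    K = additive_se_kernel(X_train, X_train, variances, lengthscales)
    K += noise * np.eye(len(X_train))
    k_star = additive_se_kernel(x_star, X_train, variances, lengthscales)
    prior = additive_se_kernel(x_star, x_star, variances, lengthscales)[0, 0]
    return float(prior - (k_star @ np.linalg.solve(K, k_star.T))[0, 0])

# Two base kernels with unequal variances (cf. the k_1, ..., k_6 discussion).
variances, lengthscales = [2.0, 0.5], [1.0, 1.0]
X_train = np.array([[0.0, 0.0], [0.5, 0.5]])  # evaluated safe points

# Posterior variance grows as the candidate moves away from the evaluated
# points, which is the monotonicity behaviour behind Lemma D.2.
near = posterior_variance(np.array([[0.1, 0.1]]), X_train, variances, lengthscales)
far = posterior_variance(np.array([[2.0, 2.0]]), X_train, variances, lengthscales)
assert 0.0 <= near < far
```

Because the kernel is a sum of one-dimensional base kernels, domain knowledge enters through the individual variances, in the spirit of the hardware-experiment setup described in the response.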
57NfyYxh5f
How to Probe: Simple Yet Effective Techniques for Improving Post-hoc Explanations
[ "Siddhartha Gairola", "Moritz Böhle", "Francesco Locatello", "Bernt Schiele" ]
Post-hoc importance attribution methods are a popular tool for “explaining” Deep Neural Networks (DNNs) and are inherently based on the assumption that the explanations can be applied independently of how the models were trained. Contrarily, in this work we bring forward empirical evidence that challenges this very notion. Surprisingly, we discover a strong dependency on and demonstrate that the training details of a pre-trained model’s classification layer (<10% of model parameters) play a crucial role, much more than the pre-training scheme itself. This is of high practical relevance: (1) as techniques for pre-training models are becoming increasingly diverse, understanding the interplay between these techniques and attribution methods is critical; (2) it sheds light on an important yet overlooked assumption of post-hoc attribution methods which can drastically impact model explanations and how they are interpreted eventually. With this finding we also present simple yet effective adjustments to the classification layers, that can significantly enhance the quality of model explanations. We validate our findings across several visual pre-training frameworks (fully-supervised, self-supervised, contrastive vision-language training) and analyse how they impact explanations for a wide range of attribution methods on a diverse set of evaluation metrics.
[ "Interpretability", "Explainable AI", "Representation Learning", "Pre-training" ]
Accept (Poster)
https://openreview.net/pdf?id=57NfyYxh5f
https://openreview.net/forum?id=57NfyYxh5f
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xEjpQlnGLg", "vYk3nlZaMx", "qxQk8EVqCt", "qmzmPVMz8b", "lS34BQ1MA7", "iMDyzhi7DY", "g7rv1zv427", "eW2NEuMyHl", "bVZKtD1OeE", "bLsN7SSiQR", "V13lFDv0pC", "UjnJIMM8y9", "TrwFnlPWGn", "JByuM62GuV", "GsALJQUTZS", "GXCHTX7tx5", "D3n6dTt028", "Aw4iZ0RZC8", "8tjLQTeyuj", "7f6rot27Br", "7ccDlFPDXa", "68Czdedu9S" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732524943812, 1733120432561, 1737523674142, 1732699643167, 1732887835828, 1732530568222, 1732623469485, 1732530324355, 1732584380508, 1730520584402, 1732532337452, 1732604253311, 1730235762441, 1730357390748, 1730232844953, 1732635962183, 1732525490912, 1734624493266, 1732536073073, 1732635135683, 1732528576777, 1732528987592 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_VgDa" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_nDZf" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_nDZf" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_VgDa" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_7mwF" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_Awie" ], [ 
"ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Area_Chair_JoQP" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Reviewer_Awie" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ], [ "ICLR.cc/2025/Conference/Submission4964/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author\\u2019s Response to Common Concern Regarding Limited Model Diversity\", \"comment\": \"In the following, we address the common concern that was shared by Reviewers (nDZf, 7mwF) regarding limited model diversity.\\n\\n__We updated our submission with results for ViTs, which fully corroborate our previous findings; we thank the reviewers for this excellent suggestion which we believe to further strengthen our submission.__\\n\\nIn particular, following the reviewer\\u2019s suggestion, we evaluated both the interpretability of conventional ViTs, using CGW1 [1], Rollout [2], and ViT-CX (CausalX) [3], and the inherently-interpretable B-cos ViTs [4]. 
To summarize, we observe the following:\\n\\n* The interpretability scores for BCE-trained classifiers consistently outperform CE-trained classifiers across all pre-trainings (supervised, MoCo, DINO) and explanation methods (see Table 1,2 below).\\n* The classification accuracy on ImageNet for models trained with BCE loss matches the accuracy of models trained with CE loss (See Table 1,2 below).\\n* When probing the ViT backbones with Bcos MLPs, we find improvements in both interpretability scores and accuracy (see Table 3).\\n\\n__Table 1: Accuracy and GridPG Localization Scores for Conventional ViTs on [1,2,3]__\\n| | | ACC % | | | Localization% | |\\n|---------------|--------------|:-----:|:----:|:-------------:|:-------------:|:-------------:|\\n| Backbone | Pre-training | CE | BCE | $\\\\Delta_{bce-ce}^{CGW1}$ [1] | $\\\\Delta_{bce-ce}^{Rollout}$ [2] | $\\\\Delta_{bce-ce}^{CausalX}$ [3] |\\n| ViT-B/16 | Sup | 73.2 | 72.8 | **+15.7** | **+0.4** | **+0.7** |\\n| ViT-B/16 | DINO | 77.2 | 77.6 | **+11.7** | **+4.8** | **+5.3** |\\n| ViT-B/16 | MoCov3 | 76.3 | 75.1 | **+4.4** | **+3.3** | **+3.8** |\\n\\n__Table 2: Accuracy and GridPG Localization Scores for Bcos ViTs [4]__\\n| Backbone | Pre-training | Acc$_{CE}$% | Acc$_{BCE}$ | $\\\\Delta_{bce-ce}^{Bcos}$ [4] |\\n|:--------------:|:------------:|:-----:|:----:|:----------:|\\n| B-ViT$_c$-B/16 | Sup | 77.3 | 78.1 | **+23.5** |\\n| B-ViT$_c$-S/16 | DINO | 73.2 | 73.4 | **+32.7** |\\n| B-ViT$_c$-B/16 | DINO | 77.1 | 77.3 | **+20.4** |\\n\\n\\n__Table 3: Bcos MLP vs Linear Probing\\u2014Accuracy and GridPG Localization Scores on B-ViTs [4]__\\n| Backbone | Classifier | Acc% | $\\\\Delta_{mlp-lp}^{acc}$ | Loc.% | $\\\\Delta_{mlp-lp}^{loc}$ |\\n|--------------|--------------|------|-------------------------|-------|-------------------------|\\n| B-ViT$_c$-S/16 | Linear Probe | 73.4 | -- | 79.9 | -- |\\n| B-ViT$_c$-S/16 | Bcos-MLP-2 | 74.2 | **+0.8** | 80.4 | **+0.5**|\\n| B-ViT$_c$-S/16 | Bcos-MLP-3 | 74.7 | **+1.3** | 
82.4 | **+2.5** |\\n| B-ViT$_c$-B/16 | Linear Probe | 77.3 | -- | 80.3 | -- |\\n| B-ViT$_c$-B/16 | Bcos-MLP-2 | 78.2 | **+0.9** | 82.1 | **+1.8** |\\n| B-ViT$_c$-B/16 | Bcos-MLP-3 | 79.7 | **+2.4** | 83.4 | **+3.1** |\\n\\nWe have updated the results in the revised submission under the Appendix in Section C (BCE vs. CE Probing \\u2014 Table C1, C2: accuracy and localization scores; MLP Probing\\u2014Table C3) and referred to this in Section 5 of the main paper.\\n \\n__In short, the new results complement our previous findings and further highlight the general applicability of the findings to diverse architectures and backbones.__\\n\\nFor individual queries please refer to the respective rebuttal sections.\", \"references\": \"1. H. Chefer, S. Gur, and L. Wolf. Transformer interpretability beyond attention visualization. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 782\\u2013791, 2020.\\n2. S. Abnar and W. Zuidema. Quantifying attention flow in transformers. In Annual Meeting of the Association for Computational Linguistics, 2020.\\n3. W. Xie, X. hui Li, C. C. Cao, and N. L. Zhang. Vit-cx: Causal explanation of vision transformers. In International Joint Conference on Artificial Intelligence, 2022.\\n4. M. Boehle, N. Singh, M. Fritz, and B. Schiele. B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers, 2023.\"}", "{\"title\": \"Follow-up to Reviewer 7mwF Regarding Rebuttal Feedback\", \"comment\": \"Dear Reviewer 7mwf,\\n\\nWe deeply appreciate the constructive feedback, as well as the time and effort invested in evaluating our work. 
As we approach the conclusion of the discussion phase, we kindly ask if our response and additional experiments have addressed all concerns, and might prompt a re-assessment of the current score.\\n\\nWe'd be happy to provide more clarifications if required.\\n\\nThank you once again for your time and consideration.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author's Response to Reviewer VgDa's Update\", \"comment\": \"We are happy to have resolved the reviewer's concerns and appreciate the positive feedback.\", \"regarding_the_remaining_comment_on_mlp_probes\": \"in general, it should be expected that more complex probes yield higher accuracy due to their greater modeling capacity, see e.g. [1]. However, to the best of our knowledge, a similarly strong link to improvements in visual explanation quality as demonstrated in our results has not been reported before\\u2014i.e., the fact that it is possible to simultaneously improve __both the__ _explanations __and__ accuracy_.\\n\\nWe believe this to be of high interest to the XAI community in particular as it highlights that accuracy and interpretability need not be at odds, which is a commonly held notion. In fact, our results suggest that particularly for inherently interpretable models, higher accuracy might indeed _improve explanation quality_, as the explanations are a reflection of the model's internal 'decision process' [2].\\n\\nWe again thank the reviewer for their valuable feedback and insightful questions, and are pleased with their positive outlook on our work.\\n\\n_References_\\n1. J. Hewitt, & P. Liang. Designing and Interpreting Probes with Control Tasks. ACL, 2019.\\n2. M Boehle, M. Fritz, and B. Schiele. B-cos networks: Alignment is all we need for interpretability. 
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.\"}", "{\"title\": \"Author's Follow-up To Reviewer 7mwF - we kindly ask if the rebuttal has addressed all concerns\", \"comment\": \"Dear Reviewer 7mwF,\\n\\nWe are grateful for the constructive feedback and thoughtful consideration. As the discussion period concludes soon, we kindly ask if our response and additional experiments have addressed all concerns. \\n\\nWe\\u2019d be happy to provide more clarifications if needed.\\n\\nMany thanks!\"}", "{\"title\": \"Author's Response to Reviewer VgDa (Part 2 of 2)\", \"comment\": \"_Response to Questions_\\n\\n_1. \\u201cI am still confused as to why CE produces worse attribution than BCE. Could the authors explain this again?\\u201d:_ __We posit that BCE-trained probes yield more localized and class-specific attributions due to their non-shift-invariant optimization.__\\n\\nThis is one of the core findings in our work which we explain in more detail below.\\n\\nWhile the BCE- and the CE-based probes rely on the same frozen backbones, BCE-trained models learn a combination of more-class discriminative features as compared to CE-trained models, even for simple linear classifiers (probes).\\n\\nWe hypothesize (in Section 3.2) that the shift-invariance of the CE loss contributes to its poorer attribution quality. Specifically, the Softmax function within CE is invariant to adding a constant shift to all logits. This results in multiple (infinite) linear classifiers that are equivalent under the CE-based optimization. 
As most attribution methods, in some form or another, rely on the models\\u2019 backbone + classifier weights to compute the importance attributions, it cannot be expected that for CE-trained probes the attributions are calibrated such that \\u2018positive\\u2019 attributions will always be class specific.\\n\\nIn contrast, the __BCE loss is not shift-invariant.__ It penalizes constant positive shifts to non-target classes, thereby biasing the model toward `class-specific\\u2019 features. As a result, __BCE-trained probes produce better-calibrated explanations, with higher class-specificity and improved localization of class objects__ (as is demonstrated by our results both qualitatively and quantitative in Sec 5.1).\\n\\nWe will include a more detailed explanation in the final version of the paper for increased clarity.\\n\\n\\n_2. \\u201cAlso, why is it that the last output layer is so important? Why is the rest of the model have such little importance?\\u201d_: __That is one of our core findings!__\\n\\n\\nFoundational vision models, pre-trained on large datasets with self-supervised or vision-language objectives, learn general features that transfer well to downstream tasks. \\nProbing these pre-trained models is a commonly used approach for downstream tasks. Understanding how to obtain good explanations for the combined models is thus highly relevant in practice.\\n\\nIn our work we make this very surprising yet important finding, that the training details of a pre-trained model\\u2019s classification layer (<10% of model parameters) play a crucial role, much more than the pre-training scheme itself. 
We study this empirically and find that even for simple linear probes, the localisation ability of the explanations highly depends on how the probes are trained\\u2014BCE-trained probes produce better localized explanations as compared to CE-trained probes (see Section 5.1 in main paper).\\n\\nThis is interesting as different probes (CE / BCE) compute their outputs by nothing but a weighted mean of the frozen backbone features. As such, the explanations are largely dominated by the backbone features. Interestingly, BCE induces a combination of features that leads to more localizing explanations (which we now __also show for Vision Transformers!__).\\n\\nFurther, we note that features from pre-trained backbones might not be linearly separable with respect to downstream classes, thus we also propose a simple tweak: using B-cos MLP probes leads to improvements in both downstream performance and explanation quality (see Section 5.2 in main paper).\\n\\nThese findings highlight a critical yet overlooked aspect of explainable AI (XAI): __the interplay between model training and post-hoc explanations.__ We believe this has further potential for future research and is important for the community to build upon.\\n\\nWe once again thank the reviewer for appreciating our efforts on clear writing and sharing our enthusiasm for the interesting and impactful findings. \\n\\nWe hope our response adequately addresses the reviewer\\u2019s concerns and queries, and we would be grateful if the reviewer considered updating their score.\"}", "{\"title\": \"Clarifications\", \"comment\": \"The authors have responded to my questions in a satisfactory manner. However, it is kind of puzzling that this approach improves the accuracy and the explainability. If the method improves the accuracy, why even position the contributions as an XAI paper? 
Anyways, the paper is good enough given the performed revision.\"}", "{\"title\": \"Author's Response to Reviewer VgDa (Part 1 of 2)\", \"comment\": \"Thank you for your review and feedback. We appreciate your assessment of our work as __\\u201can interesting read\\u201d__ with __\\u201cclear writing\\u201d__ and __\\u201cimpactful experiments\\u201d__. We provide a pointwise response to individual concerns below.\\n\\n_Discussion of why there is an increase in accuracy and attribution quality with more complex output layers._: __We find Bcos-MLPs improve accuracy and attribution quality by better disentangling class-specific information from pre-trained features, which may not be linearly separable.__\\n\\nWe thank the reviewer for their question! We aim to clarify it below: \\n\\nThe features computed by self-supervised or vision-language backbones are not optimized to be linearly separable with respect to the classes for downstream tasks, limiting the utility of linear probes. The larger modeling capacity of B-cos MLPs allows them to extract information in a non-linear manner. 
This can lead to improvements in accuracy\\u2014interestingly, we find that this also significantly improves the \\u2018class-specificity\\u2019 and thus the localisation performance of explanation methods, more so than conventional MLPs (see Table 1 below).\\n\\nWe attribute this to the alignment pressure introduced by B-cos layers [1] that helps distill object-class relevant features and rely less on background context.\\n\\n__Table 1: Bcos MLP vs Std MLP: Change in Accuracy and GridPG Scores on ImageNet for Dino ResNet50 model.__\\n| | Bcos MLPs | | Std Mlps | |\\n|------------|-----------|-----------|-----------|-----------|\\n| XAI Method | $\\\\Delta^{acc}_{mlp-lp}$| $\\\\Delta^{loc}_{mlp-lp}$ |$\\\\Delta^{acc}_{mlp-lp}$ | $\\\\Delta^{loc}_{mlp-lp}$ |\\n| LIME | +2.9 | **+38** | +2.5 | **+32** |\\n| IntGrad | +2.9 | **+22** | +2.5 | -6 |\\n| GradCAM | +2.9 | **+8** | +2.5 | -8 |\\n\\nWe discuss this in the main paper, specifically in Sec 3.3 and find it validated empirically in our results in Sec 5.2. The findings do in fact suggest that the model with MLP probes relies on more specific \\u2018class-relevant\\u2019 features since it has the capacity to discover such features.\\n\\nWe would be happy to further add an extended discussion about this in more detail to the paper, and how to interpret it better. This is very interesting and important for people to understand in the research community.\\n\\n_\\u201cI assume proper train, test, and validation sets have been used?\\u201d_: __Yes, we adhere to well-established standard dataset splits for all our experiments.__\\n\\nSpecifically, train/minival/val splits for ImageNet [2], the standard train/val/test split for PascalVOC [3] and train/val for MS-COCO [4]. Details on dataset splits and implementation details are already included in the appendix (please see Appendix Section F]).\\n\\n_References:_\\n\\n1. M Boehle, M. Fritz, and B. Schiele. B-cos networks: Alignment is all we need for interpretability. 
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10319\u201310328, 2022.\n2. Tan, M., Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proceedings of the 36th International Conference on Machine Learning, (2019).\n3. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html. \n4. https://cocodataset.org/\"}", "{\"comment\": \"Thanks for the authors' responses, which have well addressed my concerns. I therefore raised my score to 6.\n\nFor the revised version of the paper, I would recommend that the authors more prominently include the results related to ViTs in the main text. This is crucial to demonstrate that the findings described in the paper are not confined to a single model, such as ResNet50.\"}", "{\"summary\": \"The paper challenges the traditional notion that model explanations are independent of training methods by demonstrating that the quality of attributions for pre-trained models depends significantly on how the classification head is trained. It shows that using binary cross-entropy (BCE) loss instead of conventional cross-entropy (CE) loss leads to marked improvements in interpretability metrics across several visual pre-training frameworks. Furthermore, it is found that the non-linear B-cos MLP probes boost the class-specific localization ability of attribution methods.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1.\tClarity and Organization: The paper is exceptionally well-written and structured, enhancing readability and accessibility of the key findings.\n2.\tThe study reveals that training probes using binary cross-entropy (BCE) loss instead of the traditional cross-entropy (CE) loss consistently enhances interpretability metrics. The analysis of the Softmax Shift-Invariance Issue is interesting and insightful. 
This could have substantial implications for various DNN-based applications.\n3.\tThe improvements in interpretability metrics are shown to be consistent across various training methods for the visual encoder. The robustness of these findings was thoroughly validated using diverse learning paradigms, including supervised, self-supervised and CLIP.", "weaknesses": "1.\t(Major) Limited Model Diversity: The research exclusively utilizes the ResNet50 model backbone, which cannot adequately represent the behavior across various architectures. Testing additional backbones, especially Vision Transformers (ViTs), and incorporating explanation methods tailored for these models (referenced as [1][2][3]), would provide a more robust validation of the findings.\n2.\tInclusion of Additional Methods: The paper could be strengthened by including more popular perturbation-based methods, such as RISE [4] and Score-CAM [5], to further substantiate the interpretability improvements.\n3.\tSelection of Examples: Concerns arise regarding whether the examples shown in Figures 1 and 6 are cherry-picked, especially since the GridPG Score in Figure 5 suggests that the BCE model does not always perform perfectly. Including a broader range of examples, particularly where the BCE model scores lower on the GridPG, would offer a more comprehensive understanding and enhance the paper's credibility. \n\nI would be happy to improve my rating of the paper if these issues are addressed thoroughly.\n\n[1] Transformer interpretability beyond attention visualization. 
\\n\\n[2] Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers.\\n\\n[3] Vit-cx: Causal explanation of vision transformers.\\n\\n[4] RISE: Randomized Input Sampling for Explanation of Black-box Models.\\n\\n[5] Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's Response to Reviewer Awie\", \"comment\": \"Thank you for your review and feedback. We appreciate your assessment of our work as __\\u201cvery well written\\u201d__ with __\\u201cclear findings\\u201d__ and __\\u201cimpactful for future considerations of interpretable model design, post-hoc explainability, and improving model interpretation.\\u201d__\\n\\nWe provide a pointwise response to individual concerns and queries below.\\n\\n_1. \\u201cMinor Spelling Mistakes\\u201d:_ __We have updated the manuscript with fixed spelling mistakes.__\\n\\nWe thank the reviewer for their kind suggestion. We have taken a pass over the entire manuscript and fixed any spelling mistakes we could find. This has helped make our writing more clear and consistent.\", \"specific_fixes_below\": \">L99: \\u201cexplanations\\u201d -> \\u201cexplanation methods\\u201d,\", \"l130\": \"\\u201cRecent work have studied\\u201d -> \\u201cRecent work has studied\\u201d,\", \"l142\": \"\\u201cmethod\\u201d -> \\u201cmethods\\u201d,\", \"l172\": \"\\u201casses\\u201d -> \\u201cassess\\u201d,\", \"l305\": \"\\u201cc is computes\\u201d -> \\u201cc computes\\u201d,\", \"l358\": \"\\u201crepsect\\u201d -> \\u201crespect\\u201d,\", \"l428\": \"\\u201clocalizataion\\u201d -> \\u201clocalization\\u201d,\", \"l541\": \"\\u201cqualitiative\\u201d -> \\u201cqualitative\\u201d.\\n\\n_Response to Questions_\\n\\n_1. 
Would the authors suggest the development of future classification models take into consideration the information in this paper?_: __Our work demonstrates that the quality of the outputs of explanation methods is closely tied to the downstream classification objective.__\n\nThese findings are not common knowledge within the community and highlight a critical yet overlooked aspect of explainable AI (XAI): __the interplay between model training and post-hoc explanations.__ While this is important for classification models themselves, it is even more critical for improving the alignment between attribution methods and classification objectives.\n\nWe strongly believe this has further potential for future research and is important for the community to build on.\n\n_2. Do the authors think that a training loss could be created to further improve explainability as the minor differences in CE and BCE have a significant effect?_: __Yes, this is an excellent suggestion for future research, and our findings strongly support this direction!__\n\nOur findings and results suggest that the quality of explanations generated by popular attribution methods can be significantly influenced by the training objective of the classification layers.\n\nThis naturally suggests that developing new loss functions specifically to enhance explainability is a promising direction for future research. For instance, prior work has shown that \n* Incorporating sparsity constraints [1] or attribution priors [2] directly into the loss function or adding model guidance [3] during training could improve attribution quality, though such techniques might introduce additional computational costs.\n* Even a simple sparsity loss applied to the final classification layer has been shown to enhance attribution properties in some cases. This aligns well with our findings, which emphasize the importance of seemingly minor differences in training objectives (e.g., CE vs. 
BCE).\n\nThat being said, one of the core takeaways from __our work is that it challenges a common assumption within the explainable AI (XAI) community: that post-hoc explanation methods are unaffected by how the model was trained.__ Our systematic experiments demonstrate that even subtle changes in training objectives can have drastic implications for explainability, highlighting a critical area for future exploration.\n\nWe once again thank the reviewer for their positive and encouraging remarks about our submission.\n\nWe hope our response adequately addresses the reviewer\u2019s concerns and provides clarifications to their questions.\n\n_References_\n1. H. Cunningham, A. Ewart, L. R. Smith, R. Huben and L. Sharkey. \u201cSparse Autoencoders Find Highly Interpretable Features in Language Models.\u201d ICLR, 2024.\n2. E. Weinberger, J. D. Janizek and S. Lee. \u201cLearning Deep Attribution Priors Based On Prior Knowledge.\u201d NeurIPS, 2019.\n3. F. Friedrich, W. Stammer, P. Schramowski, and K. Kersting. A Typology to Explore and Guide Explanatory Interactive Machine Learning. arXiv preprint arXiv:2203.03668, 2022.\"}
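As a small numerical sketch of the CE-vs-BCE distinction discussed above (our own illustration, not part of the submission; function names are hypothetical): the softmax cross-entropy loss is invariant to adding the same constant to all logits, while a per-class sigmoid BCE loss is not, so BCE additionally constrains the absolute scale of each logit.

```python
import numpy as np

def softmax_ce(logits, target):
    """Multi-class cross-entropy: the softmax couples all logits together."""
    z = logits - logits.max()  # stable; this subtraction already uses shift-invariance
    return -(z[target] - np.log(np.exp(z).sum()))

def sigmoid_bce(logits, target):
    """One-vs-all BCE: each logit is scored by an independent sigmoid."""
    t = np.zeros_like(logits)
    t[target] = 1.0
    p = 1.0 / (1.0 + np.exp(-logits))
    return -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)).sum()

logits = np.array([2.0, -1.0, 0.5])
shifted = logits + 5.0  # add the same constant to every logit

ce_invariant = np.isclose(softmax_ce(logits, 0), softmax_ce(shifted, 0))
bce_invariant = np.isclose(sigmoid_bce(logits, 0), sigmoid_bce(shifted, 0))
print(ce_invariant, bce_invariant)  # True False
```

The BCE loss changes under the shift because each sigmoid reads its logit in absolute terms, which is consistent with the shift-invariance discussion in the submission.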
They specifically show that a binary cross entropy trained output layer produces better attributions than a cross entropy trained output layer. The increase in attribution quality does typically come at the cost of <10% accuracy reduction when using a linear layer, but the accuracy can be improved by using a more complex output layer.", "soundness": "3", "presentation": "4", "contribution": "3", "strengths": "Overall, it is an interesting read and demonstrates some interesting results. The writing is generally clear, the issue is well defined, and the experiments are impactful. It is difficult for me to say exactly what the authors did well, other than that it is a good read.\n \n1. In-depth motivation section, outlining the issues around generating consistently clear attributions\n2. Plenty of qualitative results\n3. Experiments over a variety of pre-trained models and datasets\n4. The authors clearly show that this is an attribution-invariant issue.", "weaknesses": "There isn't any discussion of why there is an increase in accuracy and attribution quality with more complex output layers. Is it as simple as the layers being larger, or is there another reason? I assume proper train, test, and validation sets have been used?", "questions": "1. I am still confused as to why CE produces worse attribution than BCE. Could the authors explain this again?\n 2. Also, why is it that the last output layer is so important? Why does the rest of the model have so little importance?", "flag_for_ethics_review": "['No ethics review needed.']", "details_of_ethics_concerns": "n/a", "rating": "6", "confidence": "4", "code_of_conduct": "Yes"}", "{\"summary\": \"This paper discovers and demonstrates the strong dependence of post-hoc importance attribution methods on the training details of the classification layer of the pre-trained model. 
Based on these findings, the paper also proposes a simple but effective adjustment to the classification layer to significantly improve the quality of model explanations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper reveals and demonstrates the strong dependence of post-hoc importance attribution methods on the training details of the classification layer in pre-trained models.\", \"weaknesses\": \"1. The experimental method is limited to ResNet50, and the results are not extensive enough. Thus, experimental results are not convincing enough to verify the effectiveness of their methods.\n\n2. The contribution of this article is not enough. The author discovered the impact of training details on post-processing methods, but the evaluation metrics used and the subsequent B-cos model are not the author's innovation.\n\n3. [minor] Figures in this paper have obvious flaws. It would be better if the authors carefully revised their figures.\", \"questions\": \"1. In the case of backbone freezing, increasing classifier parameters can improve performance. Is the design of B-cos MLP necessary? Is MLP not possible?\n2. Can you provide more loss function results to verify Softmax Shift-Invariance? How about the cross-entropy loss?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors find and support an interesting observation that the method of training the classifier layer of a model has a significant impact on the results of post-hoc attribution methods. 
Because many post-hoc attribution methods assume that model training does not have an impact, they find that this must be reconsidered, and in fact, simply modifying the method of training the last linear layer(s) can improve model accuracy and explainability.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper is very well written and planned. Not only are the approaches and findings very clear but the authors provide extensive support of their findings over numerous models, datasets, attribution methods, and metrics.\\n\\nThe choice to study multiple pre-training approaches adds significant strength to their arguments and findings. \\n\\nThe overall findings are simple, but impactful for future considerations of interpretable model design, post-hoc explainability, and improving model interpretation.\", \"weaknesses\": \"There are not significant weaknesses to address. There are minor spelling mistakes, but it does not hurt the delivery of the information.\", \"questions\": \"Would the authors suggest the development of future classification models take into consideration the information in this paper?\\n\\nDo the authors think that a training loss could be created to further improve explainability as the minor differences in CE and BCE have a significant effect?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's Response to Reviewer Awie's Update\", \"comment\": \"We thank the reviewer for their prompt reply to our response. We are pleased to find that the reviewer finds all their concerns well addressed, and are happy to retain their positive rating about our work.\\n\\nThanks for the recommendation, yes we will thoroughly integrate the ViT results into the final revised version.\"}", "{\"title\": \"Author's response to Reviewer nDZf\", \"comment\": \"Thank you for your review and feedback. 
We appreciate your assessment of our work as __\\u201cinteresting and insightful\\u201d__ and __\\u201cexceptionally well-written and structured\\u201d__. We provide a pointwise response to individual concerns below.\\n\\n_1.\\u201cLimited Model Diversity\\u201d_: __We have expanded our experiments to include ViTs and incorporated these results into the manuscript.__\", \"we_summarise_the_key_observations_on_vits_below\": [\"We evaluated both conventional ViTs (using CGW1 [1], ViT-cx [2], and Rollout [3]) and inherently interpretable B-cos ViTs [4], and the updated experiments fully corroborate our previous findings. In particular we find:\", \"BCE-trained classifiers to outperform CE-trained classifiers in interpretability scores across all pre-trainings and explanation methods.\", \"BCE-trained models achieve similar accuracy as CE-trained models.\", \"Probing ViT backbones with B-cos MLPs yields improvements in both interpretability and accuracy.\"], \"for_a_more_detailed_clarification_please_refer_to_the_common_response_here\": \"https://openreview.net/forum?id=57NfyYxh5f&noteId=xEjpQlnGLg\\n\\n\\n_2. 
\u201cInclusion of Additional Perturbation-based explanation methods\u201d_: __We updated our submission additionally with results for ScoreCAM [5], which are fully consistent with our previous findings.__\n\n* Specifically, the interpretability scores with ScoreCAM [5] explanations for BCE-trained classifiers significantly outperform CE-trained classifiers across all pre-trainings (supervised, MoCo, BYOL, DINO) on both conventional and Bcos backbones (see Table 1 below):\n\n__Table 1: BCE vs CE Probing\u2014Localization Scores on ImageNet for conventional and Bcos backbones with different pre-trainings.__\n| Backbone | Pre-training | Loc%$_{CE}$ | Loc%$_{BCE}$ | $\\Delta_{bce-ce}^{loc}$ |\n|:--------:|:-----------:|:----:|:----:|:-------------:|\n| RN50 | MoCov2 | 52.6 | 54.4 | **+1.8** |\n| RN50 | BYOL | 38.1 | 40.5 | **+2.4** |\n| RN50 | DINO | 44.2 | 55.9 | **+11.7** |\n| BRN50 | MoCov2 | 30.2 | 43.2 | **+13.0** |\n| BRN50 | BYOL | 45.3 | 50.2 | **+4.9** |\n| BRN50 | DINO | 51.0 | 67.7 | **+16.7** |\n\nWe have updated the results in the manuscript under the Appendix in Section E1 and added Table E3, and also referred to this in Section 5 of the main paper.\n\n_3. \u201cConcerns regarding the selection of examples\u201d_: __We have updated our submission to address the reviewer's concern regarding the selection of visual examples by adding many more qualitative examples to better reflect the range of different localization scores.__\n\nTo ensure a balanced and comprehensive view of our findings, we have expanded the qualitative analysis to include a diverse set of randomly sampled examples. These examples, now detailed in Section D.4 of the Appendix, are presented in Figures D10-13, and include a mix of high and low GridPG scores for BCE vs. CE probing and MLP probing. 
Each example is annotated with its localization score to provide clearer context and further transparency.\n\nWe once again thank the reviewer, firstly for appreciating our efforts and sharing our enthusiasm for the interesting and insightful findings, and secondly for providing great feedback that makes our submission stronger!\n\nWe hope our revisions and expanded analysis adequately address all concerns, and we would be grateful if the reviewer considered updating their score.\n\n_References:_\n1. H. Chefer, S. Gur, and L. Wolf. Transformer interpretability beyond attention visualization. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 782\u2013791, 2021.\n2. W. Xie, X. H. Li, G. C. Cao, and N. L. Zhang. Vit-cx: Causal explanation of vision transformers. In International Joint Conference on Artificial Intelligence, 2022.\n3. S. Abnar and W. Zuidema. Quantifying attention flow in transformers. In Annual Meeting of the Association for Computational Linguistics, 2020.\n4. M. Boehle, N. Singh, M. Fritz, and B. Schiele. B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers, 2023.\n5. H. Wang, Z. Wang, M. Du, F. Yang, Z. Zhang, S. Ding, P. Mardziel, and X. Hu. Score-cam: Score-weighted visual explanations for convolutional neural networks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 111\u2013119, 2020.\"}
**Limited Model Diversity (nDZf, 7mwF):**\\n The authors have included results for Vision Transformers (ViTs), which fully corroborate their previous findings. This addition, suggested by the reviewers, further strengthens the submission.\\n\\n2. **Inclusion of Additional Perturbation-based Explanation Methods (nDZf):** \\n The authors have added results for ScoreCAM, which remain consistent with their earlier observations. This inclusion offers a more comprehensive assessment of explanation methods.\\n\\n3. **Concerns Regarding the Selection of Examples (nDZf):** \\n The authors have expanded the range of qualitative examples to better represent various localization scores, thus addressing the reviewer\\u2019s concerns about the selection of visual examples.\\n\\nImportantly, as noted by the reviewers (nDZf, VgDa, Awie), these findings highlight a crucial new aspect of explainable artificial intelligence (XAI) that should be considered when employing existing attribution methods or developing new ones in the future.\\n\\nThe authors hope that these revisions and the additional analysis adequately address all reviewer concerns. 
They remain available to provide further clarifications if needed.\"}", "{\"title\": \"Author's Summary of Rebuttal Discussion\", \"comment\": \"We would like to thank all the reviewers for their constructive feedback and are pleased that they find our paper to be __\\u201cexceptionally well-written and structured\\\"__ (nDZf, VgDa, Awie), our experiments to be __\\u201cwell defined, planned, [as well as] thoroughly validated\\u201d__, showing __\\u201cimpactful, interesting and insightful results\\\"__ (nDZf, VgDa, Awie) that can have __\\u201csubstantial implications for various DNN-based application(s)\\u201d__ (nDZf).\", \"this_is_a_summary_of_main_concerns_and_how_we_addressed_them\": \"1._\\\"Limited Model Diversity\\\"_ (nDZf, 7mwF): __We updated our submission with results for ViTs, which fully corroborate our previous findings; we thank the reviewers for this excellent suggestion which we believe to further strengthen our submission.__ \\n(refer to: https://openreview.net/forum?id=57NfyYxh5f&noteId=xEjpQlnGLg)\\n\\n2._\\u201cInclusion of Additional Perturbation-based explanation methods\\u201d (nDZf)_: __We updated our submission additionally with results for ScoreCAM which are very well consistent with our previous findings.__ (refer to: https://openreview.net/forum?id=57NfyYxh5f&noteId=D3n6dTt028)\\n\\n3._\\u201cConcerns regarding the selection of examples\\u201d (nDZf)_: __We have updated our submission to address the reviewer's concern regarding the selection of visual examples by adding many more qualitative examples to better reflect the range of different localization scores.__ (refer to: https://openreview.net/forum?id=57NfyYxh5f&noteId=D3n6dTt028)\\n\\nImportantly, as the reviewers agree (nDZf, VgDa, Awie), __our findings uncover a new crucial aspect of explainable artificial intelligence (XAI)__ that should be considered when using existing attribution methods or developing new ones in the future. 
\n\nWe hope our revisions and expanded analysis adequately address all concerns from the reviewers, and we would be happy to provide any further clarifications.\n\n_Note_\n* _All additions to the content in the revised submission have been made in green text, to clearly highlight the changes._\n* _Please refresh your browser tab in case the tables / math does not render properly on OpenReview._\n\n__Updates__\n\n* Reviewer (nDZf) has already taken the time to assess our changes and response. They have increased their rating to a positive score, and find our responses to have \"well addressed their concerns\".\n* Reviewer (Awie) has also acknowledged their satisfaction with all our responses, and has retained their positive score.\n* Reviewer (VgDa) is satisfied with our response to their questions, and has retained their positive score.\"}
In particular we find:\", \"BCE-trained classifiers to outperform CE-trained classifiers in interpretability scores across all pre-trainings and explanation methods.\", \"BCE-trained models achieve similar accuracy as CE-trained models.\", \"Probing ViT backbones with B-cos MLPs yields improvements in both interpretability and accuracy.\"], \"for_a_more_detailed_clarification_please_refer_to_the_common_response_here\": \"https://openreview.net/forum?id=57NfyYxh5f&noteId=xEjpQlnGLg\\n\\n***\\n\\n_2. \\u201cThe contribution of this article is not enough \\u2026 the evaluation metrics used and the subsequent B-cos model are not the author's innovation.\\u201d_: __We clarify our contributions as well as highlight our novel findings and their implications for XAI research.__\\n\\nWe address the concerns of the reviewer regarding our contributions below-\\n\\n__Clarification regarding the contributions__\\n\\nFirst, we fully agree with the reviewer that the evaluation metrics used and the inherently interpretable B-cos models are not our innovations. They are simply a means to an end, which we use to evaluate our findings across a diverse set-of attribution methods and well-established metrics in XAI literature for a comprehensive experimental setting.\\n\\nPlease note, we do not claim any of these as our innovations and have ensured that is not the case.\", \"we_take_this_opportunity_to_highlight_our___key_contributions_and_findings___more_clearly\": \"__Uncovering a Critical Problem in Attribution Methods__\\nWe show for the first time that model training objectives can significantly affect attribution methods, challenging the assumption that post-hoc attributions are agnostic to model training. 
In particular, BCE-trained models consistently yield better explanations than CE-trained models (see Section 5.1).\\n\\n__Simple Adjustments Improve Explanations__\\nWe find that replacing the final classification layer with non-linear B-cos MLP probes not only boosts downstream performance across pre-trained backbones but also the \\u2018class-specific\\u2019 localization ability of attribution methods (see Section 5.2).\\n\\n_To the best of our knowledge, __our work is the first to systematically study this interplay between attribution methods, the model\\u2019s training objective and the classifier\\u2019s complexity__._\\n\\n__Holistic Evaluation Setting__\\nWe convincingly demonstrate the generality of our findings by conducting a detailed study that includes five pre-training frameworks, a suite of ten attribution methods, both convolutional- and transformer-based architectures evaluated across multiple popular datasets.\\n\\n__Compatibility of Inherently Interpretable Models with Self-Supervised Frameworks__\\nOur final contribution is showing for the first time that B-Cos networks are compatible with popular self-supervised pre-training paradigms (DINO, MoCo, BYOL) and retain their favorable \\u2018class-specific\\u2019 attributions and faithfulness properties. \\n\\n__Reproducibility__\\nAll code, including training recipes and evaluation scripts are open-sourced for the community (see https://anonymous.4open.science/r/how-to-probe-iclr/).\\n\\nImportantly, our findings uncover a new crucial aspect of explainable artificial intelligence (XAI) that should be considered when using existing attribution methods or developing new ones in the future. \\n\\n***\\n\\n_3. \\u201c[Minor] obvious flaws in figures\\u201d_: __We have carefully reviewed all figures to ensure proper formatting and consistency within the manuscript.__\\n\\nWe appreciate the reviewer\\u2019s feedback regarding the figures. 
However, as the comment is general, it would be very helpful if the reviewer could please specify the flaws they identified.\\n\\nWe welcome any additional suggestions from the reviewer regarding specific figures that could further improve the paper\\u2019s clarity.\"}", "{\"title\": \"Author's Response to Reviewer 7mwF (Part 2 of 2)\", \"comment\": \"_Response to Questions_\\n\\n***\\n\\n_1. \\u201cIn the case of backbone freezing, increasing classifier parameters can improve performance. Is the design of B-cos MLP necessary? Is MLP not possible?\\u201d_: __We find B-cos MLPs lead to more consistent improvements in accuracy and explanation quality, in contrast to conventional MLPs.__\\n\\nWe thank the reviewer for this important question.\\n\\n__This is one of our core findings!__ The larger modeling capacity of B-cos MLPs allows them to extract information in a non-linear manner. This can lead to improvements in accuracy\\u2014interestingly, we find that this also significantly improves the localisation performance of explanation methods, more so than conventional MLPs.\\n\\n__B-cos MLPs__ lead to more consistent improvements in accuracy and explanation quality, in contrast to conventional MLPs that (in most cases) only lead to increase in accuracy (see Table 1 below). 
We attribute this to the alignment pressure introduced by B-cos layers [1] that helps distill object-class relevant features and rely less on background context.\n\n__Table 1: Bcos MLP vs Std MLP: Change in Accuracy and GridPG Scores on ImageNet for Dino ResNet50 model.__\n| | Bcos MLPs | | Std MLPs | |\n|------------|-----------|-----------|-----------|-----------|\n| XAI Method | $\\Delta^{acc}_{mlp-lp}$| $\\Delta^{loc}_{mlp-lp}$ |$\\Delta^{acc}_{mlp-lp}$ | $\\Delta^{loc}_{mlp-lp}$ |\n| LIME | +2.9 | **+38** | +2.5 | **+32** |\n| IxG | +2.9 | **+48** | +2.5 | -0.5 |\n| IntGrad | +2.9 | **+22** | +2.5 | -6 |\n| GradCAM | +2.9 | **+8** | +2.5 | -8 |\n\nWe present these quantitatively in figures __E6, E7__ (for the accuracy vs GridPG score computed on ImageNet) of the Appendix. This is also seen for ViTs (Vision Transformers); please see Table 3 in the common response here: https://openreview.net/forum?id=57NfyYxh5f&noteId=xEjpQlnGLg \n\nWe will revise the main paper to clarify these findings and their implications. \n\n***\n\n_2. \u201cCan you provide more loss function results to verify Softmax Shift-Invariance? How about the cross-entropy loss?\u201d_: __We mathematically show that if Softmax is Shift-Invariant, then the Cross-Entropy loss is also Shift-Invariant.__\n\nWe thank the reviewer for this question. 
If we understood the question correctly, the reviewer wants to understand the implications on the Cross-Entropy Loss.\\n\\nBelow, we provide additional details to clarify softmax shift-invariance and its implications for the CE loss.\\n\\n__Softmax-Shift Invariance__\\n\\nLet $\\\\hat y_{c, i}$ denote the probe's output logit for class $c$ and input $i$, and $t_{c, i}$ the respective one-hot encoded label, then\", \"softmax\": \"$\\\\frac{\\\\exp(\\\\hat y_{c,i}+\\\\delta_i)}{\\\\sum_k \\\\exp(\\\\hat y_{k,i}+\\\\delta_i)} = \\\\frac{\\\\exp(\\\\hat y_{c,i})\\\\exp(\\\\delta_i)}{\\\\sum_k \\\\exp(\\\\hat y_{k,i})\\\\exp(\\\\delta_i)}$ (Equation 2 in the main paper).\\n\\n__Cross-Entropy Loss Invariance__\\n\\nThe cross-entropy (CE) loss applies the Softmax operation to the raw logits output from the classifier. The CE loss for an image $i$ with a ground-truth class $c$ is defined as:\\n\\n$\\\\mathcal L_{\\\\text{CE}, i} = -\\\\sum_c\\\\log \\\\frac{\\\\exp\\\\left(\\\\hat y_{c, i}\\\\right)}{\\\\sum_k \\\\exp\\\\left(\\\\hat y_{k, i}\\\\right)} \\\\times {t_{c,i}}$\\n\\nWhen a shift $\\\\delta_i$\\u200b is applied, the logits become $\\\\hat y_{c,i}' = \\\\hat y_{c,i}+\\\\delta_i$. 
The CE loss then becomes \\n\\n$\\\\mathcal L_{\\\\text{CE}, i}' = -\\\\sum_c\\\\log \\\\frac{\\\\exp\\\\left(\\\\hat y_{c, i}'\\\\right)}{\\\\sum_k \\\\exp\\\\left(\\\\hat y_{k, i}' \\\\right)} \\\\times {t_{c,i}} = -\\\\sum_c\\\\log \\\\frac{\\\\exp\\\\left(\\\\hat y_{c, i} +\\\\delta_i \\\\right)}{\\\\sum_k \\\\exp\\\\left(\\\\hat y_{k, i} + \\\\delta_i \\\\right)} \\\\times {t_{c,i}}$\\n\\nUsing the shift-invariance property of softmax (Equation (2)), we observe:\\n\\n$\\\\mathcal L_{\\\\text{CE}, i}' = -\\\\sum_c\\\\log \\\\frac{\\\\exp\\\\left(\\\\hat y_{c, i} \\\\right)}{\\\\sum_k \\\\exp\\\\left(\\\\hat y_{k, i} \\\\right)} \\\\times {t_{c,i}} = \\\\mathcal L_{\\\\text{CE}, i}$\\n\\nThus, the CE loss remains unchanged under any constant shift $\\\\delta_i$.\\n\\nWe will include this extended explanation in the final version of the paper for added clarity.\\n\\n***\\n\\nWe again thank the reviewer for the feedback and suggestions. We hope our clarifications and expanded analysis adequately address all concerns, and we would be grateful if the reviewer considered updating their score.\\n\\n_References_\\n1. M. Boehle, M. Fritz, and B. Schiele. B-cos networks: Alignment is all we need for interpretability. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10319\\u201310328, 2022.\"}" ] }
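A quick numerical sanity check of the shift-invariance argument in the rebuttal above (our own illustration, not part of the original response); NumPy, the class index, and the shift value are arbitrary choices:

```python
import numpy as np

def softmax(logits):
    # Max-subtraction for numerical stability (itself an application of
    # the shift-invariance being verified here).
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(logits, target_onehot):
    # CE loss for one sample: -sum_c t_c * log softmax(logits)_c
    return -np.sum(target_onehot * np.log(softmax(logits)))

rng = np.random.default_rng(0)
logits = rng.normal(size=5)
target = np.eye(5)[2]   # one-hot label t_{c,i} for class c = 2
delta = 3.7             # arbitrary constant shift delta_i

# Softmax, and hence the CE loss, is unchanged by a constant shift of all logits.
assert np.allclose(softmax(logits), softmax(logits + delta))
assert np.isclose(cross_entropy(logits, target),
                  cross_entropy(logits + delta, target))
```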
57EjN072hl
COT Flow: Learning Optimal-Transport Image Sampling and Editing by Contrastive Pairs
[ "Xinrui Zu", "Qian Tao" ]
Diffusion models have demonstrated strong performance in sampling and editing multi-modal data with high generation quality, yet they suffer from the iterative generation process which is computationally expensive and slow. In addition, most methods are constrained to generate data from Gaussian noise, which limits their sampling and editing flexibility. To overcome both disadvantages, we present Contrastive Optimal Transport Flow (COT Flow), a new method that achieves fast and high-quality generation with improved zero-shot editing flexibility compared to previous diffusion models. Benefiting from optimal transport (OT), our method has no limitation on the prior distribution, enabling unpaired image-to-image (I2I) translation and doubling the editable space (at both the start and end of the trajectory) compared to other zero-shot editing methods. In terms of quality, COT Flow can generate competitive results in merely one step compared to previous state-of-the-art unpaired image-to-image (I2I) translation methods. To highlight the advantages of COT Flow through the introduction of OT, we introduce the COT Editor to perform user-guided editing with excellent flexibility and quality.
[ "generative models", "consistency models", "diffusion models", "optimal transport" ]
Reject
https://openreview.net/pdf?id=57EjN072hl
https://openreview.net/forum?id=57EjN072hl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vH15vTT6mK", "PoJ5lUKeki", "Kzz883aG6L", "JIUteSGm6G", "98ndX73vyd" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "decision" ], "note_created": [ 1730738289077, 1730107275167, 1733968736084, 1730441737115, 1737524111105 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11210/Reviewer_eUMd" ], [ "ICLR.cc/2025/Conference/Submission11210/Reviewer_v1MD" ], [ "ICLR.cc/2025/Conference/Submission11210/Area_Chair_HhS8" ], [ "ICLR.cc/2025/Conference/Submission11210/Reviewer_pPKV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper present Contrastive Optimal Transport Flow (COT Flow), a method that achieves fast and high-quality generation with improved zero-shot editing flexibility compared to previous diffusion models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written\", \"weaknesses\": \"1.The performance improvement of the paper is not significant.\\n\\n2.The comparison method is outdated; SDedit is a work from two years ago.\\n\\nThis paper has neither impressive results nor significant improvements. I did not find any highlights in this paper, leaning towards a rejection.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents Contrastive Optimal Transport Flow (COT Flow), a new method that achieves fast and high-quality generation with improved zero-shot editing flexibility compared to previous diffusion models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This work presents Contrastive Optimal Transport Flow (COT Flow), a new method that achieves fast and high-quality generation with improved zero-shot editing flexibility compared to previous diffusion models.\", \"weaknesses\": \"1. 
Experiments are insufficient, covering only handbag\\u2192shoes (64\\u00d764), CelebA male\\u2192female (64\\u00d764), and outdoor\\u2192church (128\\u00d7128). These tasks are not useful.\\n2. The images are too small and unclear.\\n3. The comparison methods are too old. The paper should compare with at least some of the latest text-based image editing methods from 2024.\", \"questions\": \"1. How can this flow be used for open-set text-based image editing? What are the source distribution and target distribution?\\n2. What is the role of \\\\phi_\\\\omega in Eq. 10 and 11? Eq. 10 and 11 confuse me.\\n3. In Algorithm 1 (COT Training), if the encoder learns to output all zeros, the loss function will be zero; how is this problem handled?\\n\\nI will be happy to raise the rating if the response is good.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"The paper introduces the Contrastive Optimal Transport Flow (COT Flow), aimed at improving the speed and quality of image generation, with a focus on zero-shot editing flexibility. 
The model integrates neural optimal transport with consistency models for defining positive pairs in contrastive learning.\", \"***Strengths:***\", \"The paper is clearly written and presents a novel integration of neural optimal transport with consistency models.\", \"Demonstrates potential for application in zero-shot image editing scenarios.\", \"***Weaknesses:***\", \"The performance improvement over existing methods is marginal.\", \"Uses outdated comparison methods and lacks significant comparative analysis with recent state-of-the-art models.\", \"Limited scope and depth in experiments, which focus only on small-scale tasks with low-resolution images.\", \"Both qualitative and quantitative evaluations are weak, failing to provide compelling evidence of the model's effectiveness.\", \"The authors did not submit a rebuttal to address the concerns raised during the review process, missing the opportunity to clarify the highlighted issues. Given all the remaining concerns, this paper cannot be accepted at this time.\"], \"additional_comments_on_reviewer_discussion\": \"There is no rebuttal from the author. All concerns persist and all reviewers gave negative scores.\"}", "{\"summary\": \"This paper proposes the COT Flow model by integrating neural optimal transport and consistency models. Based on the similarities between contrastive learning and consistency models, the authors introduce a new method for defining positive pairs. Furthermore, they demonstrate that the COT Flow can be applied to zero-shot image editing.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed COT Flow model effectively enables unpaired image-to-image translation.\\n2. The COT Flow model shows potential for various zero-shot image editing scenarios.\", \"weaknesses\": \"1. Comparison methods are limited.\\n2. Quantitative evaluations are limited, relying solely on FID scores.\\n3. 
Qualitative results lack visual impact.\", \"questions\": \"1. Does COT Flow truly address the generative learning trilemma? The comparison only includes FID scores, but if the model claims to tackle the trilemma, it should demonstrate comparisons in terms of sampling speed, quality, and diversity against existing methods.\\n2. What is the training process for the neural optimal transport model?\\n3. Comparison methods are limited. Numerous recent studies on diffusion-based unpaired image-to-image translation tasks should be included, such as [1], [2], and [3]. Additional comparisons with more recent or relevant baselines could strengthen the validity and impact of the findings.\\n\\n[1] Korotin, A., Selikhanovych, D., & Burnaev, E. Neural Optimal Transport. In The Eleventh International Conference on Learning Representations.\\n[2] Su, X., Song, J., Meng, C., & Ermon, S. Dual Diffusion Implicit Bridges for Image-to-Image Translation. In The Eleventh International Conference on Learning Representations.\\n[3] Zhao, M., Bao, F., Li, C., & Zhu, J. (2022). Egsde: Unpaired image-to-image translation via energy-guided stochastic differential equations. Advances in Neural Information Processing Systems, 35, 3609-3623.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
56vHbnk35S
Graph-Guided Scene Reconstruction from Images with 3D Gaussian Splatting
[ "Chong Cheng", "Gaochao Song", "Yiyang Yao", "Qinzheng Zhou", "Gangjian Zhang", "Hao Wang" ]
This paper investigates an open research challenge of reconstructing high-quality, large-scale 3D open scenes from images. It is observed existing methods have various limitations, such as requiring precise camera poses for input and dense viewpoints for supervision. To perform effective and efficient 3D scene reconstruction, we propose a novel graph-guided 3D scene reconstruction framework, GraphGS. Specifically, given a set of images captured by RGB cameras on a scene, we first design a spatial prior-based scene structure estimation method. This is then used to create a camera graph that includes information about the camera topology. Further, we propose to apply the graph-guided multi-view consistency constraint and adaptive sampling strategy to the 3D Gaussian Splatting optimization process. This greatly alleviates the issue of Gaussian points overfitting to specific sparse viewpoints and expedites the 3D reconstruction process. We demonstrate GraphGS achieves high-fidelity 3D reconstruction from images, which presents state-of-the-art performance through quantitative and qualitative evaluation across multiple datasets.
[ "3D Gaussian Splatting", "3D Reconstruction", "NeRF", "Large Scene Reconstruction", "Graph" ]
Accept (Poster)
https://openreview.net/pdf?id=56vHbnk35S
https://openreview.net/forum?id=56vHbnk35S
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zPuBzR6BhG", "xCdBgeWxYF", "s1WwSYqO9W", "rmS9QVlfv2", "rleFfdDEhc", "r7jq7M8vNg", "owcOP4aPmF", "nHilpccmun", "m6cYxEncwd", "VeoK8WidNZ", "VNS96hnTlM", "Mt4lliac9X", "KsY1ZEClUM", "Ezm00xqbyF", "AoMpYFQM6I", "AdGPW6NRbp", "9reNwLQLgD", "34taJUiMoB", "05gT9YCdjr" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732672783279, 1732150398131, 1732150564158, 1732616042881, 1732615458725, 1732150047880, 1732150220670, 1732705103945, 1730055702530, 1732705077396, 1732149745540, 1730706857106, 1734637662014, 1737523836029, 1730702073181, 1732513318636, 1732563811009, 1732149039928, 1732580684573 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7394/Reviewer_X4qU" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Reviewer_MKvT" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Reviewer_X4qU" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Reviewer_4dAT" ], [ "ICLR.cc/2025/Conference/Submission7394/Area_Chair_aKaV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7394/Reviewer_MKvT" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], [ "ICLR.cc/2025/Conference/Submission7394/Area_Chair_aKaV" ], [ "ICLR.cc/2025/Conference/Submission7394/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission7394/Reviewer_4dAT" ] ], "structured_content_str": [ "{\"comment\": \"I would like to thank the authors for addressing all review comments in detail and providing additional evidence and clarifications about their method. Please consider revising your camera-ready version based on the review comments to enhance the readability and focus of the paper.\"}", "{\"comment\": \"Thank you for recognizing our ideas, presentation, and experiments.\\n> **Q1: Spelling Errors.**\\n\\n**A:** Thank you for pointing them out; we will carefully make the corrections.\\n\\n> **Q2: Details on acquiring initial camera poses.**\\n\\n**A:** We use only Dust3R\\u2019s pairwise pose estimation module without the Global Alignment module to obtain rough initial poses. Each image pair takes approximately 0.01 seconds on the NVIDIA A6000, with 599 pairs from 600 images in the Waymo dataset and 1999 pairs from 2000 images in the Mill 19 dataset. Given that the dataset\\u2019s initial poses are less accurate than our results, we compare them against COLMAP benchmarks and our final poses in Waymo, as the table below:\\n| | ATE RMSE | PSNR | SSIM | LPIPS |\\n|:-----------: |---------- |:---------: |:---------: |:---------: |\\n| Init. Pose | 2.13 | 23.67 | 0.785 | 0.319 |\\n| Final Pose | 0.04 | **30.36** | **0.891** | **0.267** |\\n\\n> **Q3: Symbol for concentric nearest neighbor pairing.**\\n\\n**A:** Thanks for pointing this out, we will rename S_1, S_2, and S_3. S_1 is related to the nearest neighbor cameras, so we rename it to S_neighbor; S_2 are the cameras with isometric interval, so we rename it to S_concentric; S_3 is designed for connection of camera graph, so we rename it to S_connection.\\n\\n> **Q4: Quadrant filter details.**\\n\\n**A:** The orientation is a 3D vector which is equivalent to the 3-rd column of the extrinsic matrix. This provides convenience for the following cross-product and inner-product. 
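As a concrete illustration of the Q4 answer above (our own sketch, not the paper's exact Quadrant Filter encoding): a coarse relative layout between two cameras can be read off from one inner product and one cross product involving the 3D orientation vector (third column of the extrinsic matrix) and the relative position. The world up-axis and the two returned bits are assumptions for illustration only.

```python
import numpy as np

def quadrant_bits(pos_i, pos_j, orient_i, up=np.array([0.0, 0.0, 1.0])):
    """Hypothetical quadrant-style test between cameras i and j.

    orient_i is camera i's viewing direction (third column of its extrinsic
    matrix); pos_* are camera centers (fourth column). One inner product
    decides front/behind, one cross product (dotted with an assumed world
    up-axis) decides left/right.
    """
    d = pos_j - pos_i                                # relative position of j w.r.t. i
    front = np.dot(orient_i, d) > 0.0                # inner product: j in front of i?
    left = np.dot(np.cross(orient_i, d), up) > 0.0   # cross product: j to i's left?
    return bool(front), bool(left)
```

For example, with camera i at the origin looking along (1, 0, 0), a camera at (1, 1, 0) is classified as in front and to the left.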
\\n\\n> **Q5: Details of adaptive sampling.**\\n\\n**A:** Degree centrality focuses on the activity level of nodes, while betweenness centrality is concerned with the control a node exerts or its role as a bridge. In our experiments, we found that using only betweenness centrality yielded better results, so we only use betweenness centrality to design node weights.\\n\\n> **Q6: Details on how to obtain the initial pose.**\\n\\n**A:** The Details as below:\\n1. Time and Hardware: Each pair is estimated in 0.01 seconds using an NVIDIA RTX3090 with 24GB of memory. There is no need for Dust3R\\u2019s global alignment, as it relies solely on pairwise pose estimation.\\n2. Datasets: For the Waymo dataset, we use 599 pairs (600 images) for evaluation; For the Mill 19 dataset, we use 1999 pairs (2000 images) to do evaluation.\\n\\nPlease be clarified that Dust3R is not mandatory. Coarse poses from the datasets can also be used directly in our proposed framework. For initial pose accuracy evaluations, please refer to Q2.\"}", "{\"comment\": \"> **Q7: The minimum pose quality, the error tolerance, and the reasons for comparison with COLMAP.**\\n\\n**Q7.1: What quality of poses does the framework require?**\\n\\n**A:** Our goal is to develop a framework for reconstructing outdoor scenes from uncalibrated images. We use coarse pose prior to speed up the Structure-from-Motion (SfM) process and build a graph-guided 3DGS reconstruction. The Dust3R for obtaining rough poses is optional; any coarse pose is suitable, but we choose it for its rapid and widely recognized capabilities. As long as the initial results with correct camera layout on the xy-plane can be used for successful bundle adjustment (BA), we regard them to meet the minimum quality for initial poses. 
For initial pose quality quantitative comparison, please see Q2.\\n\\n**Q7.2: Are there measures to handle large pose errors?**\\n\\n**A:** Our matching strategy requires only coarse information about the camera\\u2019s position and orientation, specifically the third and fourth columns of the extrinsic matrix. This provides a higher error tolerance; as long as the initial results do not produce severe camera layout errors on the xy-plane, BA can be completed. \\nIf BA proceeds successfully, it will not affect subsequent accuracy estimations. However, how to handle BA failures is beyond the scope of this paper. In the test datasets we used, BA is successfully completed.\\n\\n**Q7.3: Why does it compete with COLMAP?**\\n\\n**A:** The comparison with COLMAP is due to its wide recognition as an SfM benchmark in the community. \\nThe latest SfM methods like VGGSfM [R1], GLOMAP [R2], and ACEZero [R3] all use COLMAP as their primary comparative baseline. We have extensively evaluated these recent methods on the Waymo dataset. \\n| Methods | Time | PSNR | SSIM | LPIPS |\\n|:----------- |--------- |:---------: |:---------: |:---------: |\\n| VGGSfM[R1] | - | - | - | - |\\n| Ace0 [R3] | **10min** | 17.50 | 0.725 | 0.354 |\\n| PixSfM [R4] | 130min | 28.75 | 0.847 | 0.366 |\\n| Glomap [R2] | 28 min | 27.65 | 0.824 | 0.388 |\\n| Colmap | 154min | 29.14 | 0.89 | 0.250 |\\n| Ours | 23 min | **30.36** | **0.891** | **0.267** |\\n\\nAs shown in the table, VGGSfM encounters an out-of-memory (OOM) issue when processing 600 images, ACEZero is fast but fails to converge entirely, and other methods also underperform in pose performance for novel view synthesis compared to our method. Therefore, COLMAP remains the most competitive benchmark for comparison.\\n\\n**Reference**\\n\\n[R1] J. Wang et al., \\u201cVisual Geometry Grounded Deep Structure from Motion,\\u201d arXiv preprint arXiv:2312.04563, 2023. [Online]. Available: https://arxiv.org/abs/2312.04563\\n\\n[R2] L. 
Pan et al., \\u201cGlobal Structure-from-Motion Revisited,\\u201d arXiv preprint arXiv:2407.20219, 2024. [Online]. Available: https://arxiv.org/abs/2407.20219\\n\\n[R3] E. Brachmann et al., \\u201cScene Coordinate Reconstruction: Posing of Image Collections via Incremental Learning of a Relocalizer,\\u201d arXiv preprint arXiv:2404.14351, 2024. [Online]. Available: https://arxiv.org/abs/2404.14351\\n\\n[R4] P. Lindenberger et al., \\u201cPixel-Perfect Structure-from-Motion with Featuremetric Refinement,\\u201d arXiv preprint arXiv:2108.08291, 2021. [Online]. Available: https://arxiv.org/abs/2108.08291\"}", "{\"comment\": \"We thank you for your valuable comments and feedback. We are pleased to learn that you recognize the contributions of our proposed method in enhancing the quality and efficiency of GS, and we appreciate your affirmation of our new experimental results.\\n\\nWe greatly appreciate your valuable suggestion to focus on the framework and premises of our paper. We understand the importance of clearly presenting the motivation and overarching approach of our method to our readers. To this end, we will refine our paper in the following areas:\\n> **Q1: Detailed description of the experimental setup.**\\n\\nWe will incorporate the following experimental details into our paper.\\nExperiments were conducted using an NVIDIA RTX 3090 GPU and an AMD EPYC 7542 CPU. Dust3R was utilized for initial pose estimation across both the Waymo, Kitti, and Mill 19 datasets, achieving pose estimation within 0.01 seconds per pair. The Waymo includes 600 images from three viewpoints (left, center, right), covering 599 image pairs. The Kitti includes 100 images with 99 image pairs. The Mill 19 comprises 2000 images with 1999 pairs included.\\n\\nFor the CNNP configuration, settings of \\u201cr=5, h=20, w=1\\u201d were employed. \\u201cr=5\\u201d designates the five nearest cameras as matching candidates for each camera c_i . 
The \\u201ch=20, w=1\\u201d configuration means that one camera is chosen as a match for c_i out of every 20 cameras based on proximity. To optimize multi-view consistency and balance the coefficients, we set the coefficient \\\\lambda to 0.07, which was empirically found to be optimal in our experiments. Additionally, we adopted a graph-guided optimization setup, where the sampling probability is determined by the weights of the graph nodes. We set the minimum sampling probability at 0.5 to ensure that nodes with lower weights are not overlooked.\\n\\nFor comparative experiments with Colmap, the Colmap setup included a vocabulary tree containing 256K visual words, pre-built using the Flickr100k dataset (available on the COLMAP project page).\\n\\n> **Q2: Enhanced Presentation of Empirical Evidence.**\\n\\nWe have added more experimental results, including comparisons with other SFM baseline methods and the outcomes of applying our method to various 3DGS methods. These data further confirm the effectiveness and practicality of our approach, which will be included in a future version of the paper to enhance its persuasiveness.\\n\\n> **Q3: Discussion of limitations and challenges.**\\n\\nAs you suggested, we will add a section discussing the limitations and challenges of our method. \\n\\nOur research focuses on 3DGS reconstruction of outdoor, unbounded scenes, primarily facing two challenges: 1) Accurate pose estimation outdoors is difficult due to the unpredictability and complexity of outdoor environments; 2\\uff09Outdoor scenes generally have sparser camera coverage and less overlap between images compared to indoor settings, resulting in insufficient constraints during training. To address these challenges, we introduced two key modules: Spatial Prior-Based Structure Estimation and a graph-guided Gaussian optimization strategy. 
These modules are designed to efficiently and accurately complete the reconstruction of outdoor scenes.\\n\\nHowever, our method shows limited improvement in 3DGS reconstruction of objects. This is primarily because datasets in this category usually have precise ground truth (GT) poses, and the camera setups often rotate 360 degrees around the object, providing sufficient image overlap and constraints to aid convergence. This results in nearly equal importance weights for graph nodes, which diminishes the impact of our optimization.\\n\\nMoreover, our approach relies on the accuracy of rough spatial priors. If the initial camera pose distribution in the xy plane is inaccurate, it may lead to Bundle Adjustment (BA) failure. In such cases, our method might not effectively handle pose estimation errors.\\n\\n\\n\\n\\n**We value the insights your feedback has provided and are committed to continually refining our work. By conducting additional experiments and evaluations, we believe we can demonstrate that our contributions are substantial and highly relevant to practical 3DGS reconstruction tasks. We look forward to further discussions and appreciate the opportunity to strengthen our submission.**\"}", "{\"comment\": \"Thank the authors for their detailed rebuttal. Based on the responses, most of my concerns have been addressed. This work focuses more on large-scale pipeline improvements, introducing a series of practical enhancements, particularly in the SFM component to provide better initial values for GS with faster speed. While its technical novelty is relatively limited, it offers valuable practical contributions.\"}", "{\"comment\": \"> **Q6: Provide results on the MipNeRF360 dataset.**\\n\\n**A:** We consider the comparison with the original 3DGS not as a direct comparison, but as a type of ablation study. Our framework can be seamlessly integrated into methods such as 3DGS, 2DGS, Scaffold, OctreeGS, etc. 
Due to time constraints, all other results come from those published in OctreeGS. We provide the results with the original 3DGS method on the Mip360 dataset as follows:\\n| Methods | PSNR | SSIM | LPIPS |\\n|:-------------: |:---------: |:---------: |:---------: |\\n| 3D-GS | 27.54 | 0.815 | 0.216 |\\n| Mip-NeRF360 | 27.69 | 0.792 | 0.237 |\\n| 2D-GS | 26.93 | 0.800 | 0.251 |\\n| Mip-Splatting | 27.61 | 0.816 | **0.215** |\\n| Scaffold-GS | 27.90 | 0.815 | 0.220 |\\n| Octree-GS | 28.05 | 0.819 | 0.214 |\\n| Ours w/ 3DGS | **28.71** | **0.868** | 0.216 |\\n\\n> **Q7: Contributions are dispersed and not fully evaluated with SoTA methods.**\\n\\n**A:** Thank you for your positive feedback on the improvements highlighted in our paper. Please be clarified that our research is dedicated to reconstructing outdoor 3DGS scenes from unaligned images, targeting the complexities of 3DGS and outdoor settings. As shown in Q5, our graph-guided optimization module has a significant improvement over various SOTA methods. To address your concerns, we conducted additional experiments and will include these comparisons in future versions.\\n\\n> **Q8: What does \\\"w/o structure estimation\\\" mean in Table 4.**\\n\\n**A:** This means we only use Waymo\\u2019s original poses, not the estimated pose by our structure estimation module, for 3DGS reconstruction.\\n\\n> **Q9: Which datasets are used for results in Table 4, Table 5, and Table 7.**\\n\\n**A:** Tables 4, 5, and 7 all use the Waymo datasets for evaluation, with each segment containing 600 images.\\n\\n> **Q10: Dust3r Details.**\\n\\n**A:** We solely use Dust3R\\u2019s pairwise pose estimation.\\n\\n> **Q11: Definition of S_3^i in Eqn 1.**\\n\\n**A:** In the CNNP part, we traverse all of the cameras (from 1 to N, supposing current camera as c_i) and calculate which camera (c_j) should form a matching pair with c_i. S_3 means for c_i, we always add c_(i-1) in the traversal loop to form a matching pair. 
This can guarantee the camera graph must be a connected graph as illustrated in Remark.\\n\\n> **Q12: Definition of a Large-Scale Scene.**\\n\\n**A:** To avoid confusion, we will remove it.\\n\\n> **Q13: Clarification of Formula 8.**\\n\\n**A:** Thank you for pointing that out; we will make the corrections.\\n\\n\\n\\n**Reference**\\n\\n[R1] J. Wang et al., \\u201cVisual Geometry Grounded Deep Structure from Motion,\\u201d arXiv preprint arXiv:2312.04563, 2023. [Online]. Available: https://arxiv.org/abs/2312.04563\\n\\n[R2] L. Pan et al., \\u201cGlobal Structure-from-Motion Revisited,\\u201d arXiv preprint arXiv:2407.20219, 2024. [Online]. Available: https://arxiv.org/abs/2407.20219\\n\\n[R3] E. Brachmann et al., \\u201cScene Coordinate Reconstruction: Posing of Image Collections via Incremental Learning of a Relocalizer,\\u201d arXiv preprint arXiv:2404.14351, 2024. [Online]. Available: https://arxiv.org/abs/2404.14351\\n\\n[R4] P. Lindenberger et al., \\u201cPixel-Perfect Structure-from-Motion with Featuremetric Refinement,\\u201d arXiv preprint arXiv:2108.08291, 2021. [Online]. Available: https://arxiv.org/abs/2108.08291\"}", "{\"comment\": \"Thank you for recognizing the clarity of our paper and the effectiveness of our methods.\\n\\n> **Q1: A detailed explanation of Equation (1).**\\n\\n**A:** Thanks for your suggestion, we will add more detailed explanation in the revision. In Equation 1, we traverse all of the cameras (from 1 to N, supposing the current camera as c_i) and calculate which camera c_j should form a matching pair with c_i. We have the following steps:\\n1. For c_i, we sort other cameras based on their distance to c_i\\n2. We add c_i 's nearest r cameras, as S_1\\n3. We add w cameras from every h+w camera based on the same sorted order, as S_2.\\n4. 
We add c_(i-1) in the main loop to match c_i, as S_3\\n\\n> **Q2: Compare the accuracy of the initial values.**\\n\\n**A:** We provide initial pose accuracy evaluations in Waymo in the table below:\\n\\n| Methods | ATE RMSE | PSNR | SSIM | LPIPS |\\n|:----------- |---------- |:---------: |:---------: |:---------: |\\n| Init. Pose | 2.13 | 23.67 | 0.785 | 0.319 |\\n| Final. Pose | 0.04 | **30.36** | **0.891** | **0.267** |\\n\\nOur previous experiments indicate that the true poses provided by the dataset perform poorly. Therefore, we compare both the initial poses and the final results optimized by our pose estimation module against the COLMAP results as a benchmark. This comparison validated our method\\u2019s ability to significantly enhance accuracy.\\n\\n> **Q3: Tolerance of Spatial prior-based structure and graph construction.**\\n\\n**A:** Sure, inaccurate initial results do lead to poorly constructed graph. However, our matching strategy requires only coarse information about the camera\\u2019s position and orientation, specifically the third and fourth columns of the extrinsic matrix. This offers a higher tolerance for errors; as long as the initial results with correct camera layout on the xy-plane can be used for successful bundle adjustment (BA), we regard them to meet the minimum quality for initial poses.\\n\\nIf BA succeeds, it means the camera graph has already passed the BA test, and the graph topology has no big problem. However, how to handle BA failures is beyond the scope of this paper. \\n\\n> **Q4: Comparison of FPS with 3DGS.**\\n\\n**A:** In our experiments, FPS refers solely to the rendering speed after training completion, with 3DGS and our results remaining within a reasonable fluctuation range. The pose estimation method you mentioned affects the overall training time, which should be compared with the 3DGS + Colmap pipeline. 
As shown in the table below, we significantly lead in both efficiency and quality in Waymo:\\n| Images | Methods | SfM Time | Training Time | Total Time | PSNR | SSIM | LPIPS |\\n|-------- |:------------- |:--------: |:-------------: |:----------: |------- |------- |------- |\\n| 0.6k | Colmap + 3DGS | 154min | 54min | 208min | 28.81 | 0.884 | 0.253 |\\n| 0.6k | Ours | 23min | 28min | 51min | 30.36 | 0.891 | 0.267 |\"}", "{\"comment\": \"Thank you for your positive feedback and constructive comments. We appreciate your recognition of our clarifications and proposed ideas.\\nIn response to your suggestions, we will enhance the clarity and precision of our method descriptions, provide detailed methodological and experimental information for reproducibility, and reinforce the presentation of our results to emphasize the efficacy and practical utility of our approach. \\nWe value your guidance and are committed to presenting an improved version.\"}", "{\"summary\": \"The authors propose a framework for large-scale 3D reconstruction using Gaussian splatting. The authors suggest the construction of a prior graph-guided scene structure. This results in estimating a camera graph that encodes the camera topology. Then based on graph weights the employ an adaptive sampling strategy to the 3D Gaussian splatting optimization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors present an elaborate pre-processing framework to effectively scale Gaussian splitting to large scenes. The ideas presented in the paper are exciting and well-presented for the most part. 
The concept of exploiting low-cost prior heuristics to allow the network to focus on the underlying task is interesting and the experimental evaluation demonstrates the effectiveness of the method.\", \"weaknesses\": \"1) minor: There are some spelling errors, please proofread the manuscript e.g ( line 135 formed -> form, line 138 initializaition -> initialization )\\n\\n2) Line 147 - 149: Please specify here which models you use to obtain camera poses. How approximate are these poses? What is the error tolerance? Could you please provide quantitative metrics on the initial pose quality compared to ground truth if available?\\n\\n 3) For the concentric nn pairing the authors use the symbol S multiple times. It would make sense to use CNNR as a symbol of the overall process output and use more meaningful names than S1 S2 and S3 for the various heuristics. Maybe names related to their role in CNNR computation. \\n\\n4) In the quadrant filer could you please specify whether the orientation is provided as a normal or any other form (euler angles)?\\n\\n5) The adaptive sampling section is not clear. \\n - line 322 primarily considers two criteria -> Are there more criteria than these two?\\n - Line 344 We design node weight wn(i) based on betweenness centrality -> So the node weight does not take into account the degree centrality? \\n - How is the view selection probability integrated into the 3DGS optimization? \\n\\n6) Lines 457-458: Could you please provide more information on how you obtained your initial poses and what is the size of the dataset, the hardware used, or any preprocessing stats that the relative pose estimation is done in 10ms?\", \"questions\": \"What is the minimum quality of poses required by the proposed framework?\\nDoes any of the proposed steps account for large errors in the pose estimation? \\nWhy do the authors attempt to compete strongly with COLMAP? COLMAP was used as a method to obtain initial poses and the authors used wang et al. 
2024 to obtain initial poses. The framework the authors propose can be used with poses computed by either method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for providing valuable feedback. We are pleased to know that our responses have addressed most of your concerns and appreciate your recognition of our pipeline improvements.\", \"we_would_like_to_take_this_opportunity_to_further_clarify_and_emphasize_the_technical_contributions_and_novelty_of_our_work\": \"1. We propose a novel spatial prior-based structure estimation method, where the proposed Quadrant Filter rapidly computes the 6-bit encoded relative position and orientation. It significantly improves estimation speed and robustness in outdoor settings.\\n\\n2. We are the first to incorporate graph information into the 3DGS optimization process, such that the overfitting caused by limited constraints in outdoor scenes is mitigated. This strategy significantly enhances reconstruction quality and is applicable to various 3DGS-based methods.\\n\\nWe present extensive experiments and a convincing mathematical proof. Our method can also effectively process uncalibrated images from the internet to reconstruct outdoor scenes.\\n\\nThank you for your attention and feedback. We hope this clarifies concerns regarding our method\\u2019s novelty and highlights our contributions to the field. 
We appreciate your reconsideration of our work based on these points.\"}", "{\"comment\": \"> **Q4: Comparison with naive overlapping frusta.**\\n\\n**A:** Thanks for your suggestion, we list the results of our method compared to the naive overlapping frusta method.\\n| Images | Method | Pairs computing T | Matching Time | BA Time | PSNR | SSIM | LPIPS |\\n|--------|---------------|-------------------|---------------|---------|------|------|-------|\\n| 0.6 k | ours (CNNP+QF)| 0.2 min | 3 min | 20 min | 30.18| 0.9 | 0.24 |\\n| 0.6 k | frusta | 2 min | 45 min | 86 min | 30.15| 0.89 | 0.24 |\\n| 2 k | ours (CNNO+QF)| 2.1 min | 12 min | 194 min | 26.6 | 0.85 | 0.16 |\\n| 2 k | frusta | 20 min | >24 h | >24 h | - | - | - |\", \"we_implement_the_frusta_algorithm_with_the_following_steps\": \"1. The intrinsic matrix of each camera is obtained to calculate the near and far planes.\\n2. The frusta of the cameras are transformed to the world coordinate system.\\n3. We check the intersection of their bounding boxes to judge view intersection.\\n\\n**Analysis**:\\n1. (Universality) Notably, our matching strategy only needs the position and orientation of coarse cameras, i.e., only the 3rd and 4th columns of the extrinsic matrix. Moreover, we do not need the intrinsic matrix. This indicates the universality of our method.\\n2. (Efficiency) As shown in the QF section and Appendix C, we calculate the relative orientation with only 1 inner product and 1 cross product, based on a rigorous mathematical proof. As shown in the pairs-computing-time column, this greatly improves our efficiency compared to the frusta method, which involves a time-consuming coordinate transform and intersection checking process.\\n3. (Reconstruction Quality) In terms of reconstruction quality, there is no obvious difference between our method and frusta, since pose accuracy is well-solved in the BA procedure. Our approach offers better compatibility in cases where initial poses are poor.\\n4. 
(Reason why frusta failed) Although the naive frusta method sounds reasonable, in practice, especially in driving scenarios with long, narrow camera tracks, the frusta method cannot filter image pairs efficiently since all of the camera orientations are extremely similar.\\n\\n> **Q5: Comparison provided with original 3DGS instead of other state-of-the-art methods.**\\n\\n**A:** As shown in the table below, our framework can seamlessly integrate other 3DGS improvement methods and significantly enhance performance. The experiments are conducted on the Waymo dataset. This primarily relies on our graph-guided optimization module. \\n| Methods | PSNR | SSIM | LPIPS |\\n|:------------------------- |:---------: |:---------: |:---------: |\\n| Scaffold w/ Original Pose | 23.67 | 0.784 | 0.320 |\\n| Scaffold w/ Colmap | 32.02 | 0.921 | 0.206 |\\n| Scaffold w/ Ours | 33.9 | 0.923 | 0.205 |\\n| 2DGS w/ Original Pose | 19.39 | 0.624 | 0.705 |\\n| 2DGS w/ Colmap | 28.85 | 0.879 | 0.284 |\\n| 2DGS w/ Ours | 31.88 | 0.902 | 0.236 |\\n| OctreeGS w/ Original Pose | 21.83 | 0.726 | 0.404 |\\n| OctreeGS w/ Colmap | 29.03 | 0.870 | 0.295 |\\n| OctreeGS w/ Ours | 31.90 | 0.903 | 0.252 |\\n\\nTo clearly distinguish our contributions and ensure the fairness of comparisons, we opt not to publish results in the paper based on other recent state-of-the-art (SOTA) 3DGS methods. Additionally, because we do not segment into chunks, we cannot directly apply our method to Hierarchical 3D Gaussian representations. Relevant results will be added in a future version.\"}", "{\"summary\": \"The paper presents a collection of practical optimizations to improve quality and efficiency of Gaussian Splatting reconstructions from image collections without poses. The optimizations relate to\\n1. Efficient View-Pair Finding for Match-graph Construction for Structure from Motion\\n2. Octree initialization of 3D points and Level-of-details based pruning\\n3. 
Multi-view Consistency Loss in 3DGS optimization\\n4. Match-graph / Camera-graph Importance based View Sampling for 3DGS optimization\\n\\nThe authors show results indicating:\\nOptimization 1 leads to faster SfM (Table 3) and quality improvements in GS (Table 4)\\nOptimizations 2, 4 lead to faster GS optimization (Table 5, Table 7)\\nOptimizations 3, 4 lead to quality improvements in GS (Table 4, Table 7)\\n\\nThe results are evaluated on scenes from Waymo, Kitti, and Mill-19 datasets with images in the range of ~600, ~100, and ~2000.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, I like that the authors propose very practical optimizations that bring both quality and run-time improvements.\", \"Originality: The key original contribution of the paper is: Concentric Nearest Neighbor Pairing (CNNP) and Quadrant Filter (QF) organizing/pruning view-pairs in camera-graph. Other contributions are good practical applications of previously known ideas.\", \"Clarity: The paper is written in clear language, structured well, and shows experimental validation of the proposed ideas.\", \"Significance: The paper introduces practical ideas for 3DGS from no-pose image collections.\"], \"weaknesses\": \"The method proposed in the paper forms a camera-graph using Dust3r to find relative poses between image pairs and prunes pairs using the proposed CNNP and QF steps. The efficiency of structure estimation is estimated w.r.t. the default COLMAP pipeline (assuming incremental Structure from Motion). This is not a fair comparison and sufficient details are not provided, making it harder to assess the true benefit of the proposed improvements.\\n\\n- The paper mentions that Dust3r is used to estimate pairwise relative poses in 0.01 seconds. 
Can you provide a more detailed breakdown of the Dust3r usage, including whether the 0.01 seconds is per pair or total, what specific hardware was used, and how many total pairs were evaluated across the different datasets?\\n\\n- COLMAP exhaustive and COLMAP vocab-tree matching times are provided. However, sufficient details on the experimental setup and compute resources are not provided. For example, for vocab-tree matching, which dataset is used to compute the vocab-tree, how many nearest neighbors are retrieved per image, and in total how many pairs are evaluated? What compute resources are used for this matching? \\n\\nWithout these details, it is difficult to draw conclusions. \\n\\nAt a more basic level, Dust3r + CNNP + QF contributions are mainly to improve match-graph construction and BA runtime. A fair comparison of the improvements in these runtimes should be with other SoTA efficient SfM methods, not default COLMAP. \\n\\nDefault incremental SfM implemented with COLMAP is commonly used by radiance field papers to compute poses but this is by no means the most efficient pipeline. There is a vast literature on how to approximate match-graph construction going back a decade. There are well-established alternatives to incremental Structure from Motion with implementations in open-source SfM libraries such as OpenMVG, OpenSfM, Theia, and most recently GLOMAP that offer much better run-time behavior. \\n\\nGiven the authors use prior poses estimated from Dust3r, the comparison of CNNP and QF steps should be done with match-graph pruning methods that already use prior poses. A naive baseline to compare against would be to construct a camera-graph only from view pairs with overlapping frusta. Can you add this comparison to your evaluation, evaluating both quality and run-time?\\n\\nI like the practicality of the proposed ideas but I don't think that they are contextualized and compared correctly. 
The other ideas such as LOD-based point pruning and view-importance based sampling are nice practical improvements which provide qualitative and runtime gains w.r.t. the original 3DGS paper; however, 3DGS provides a baseline, not a SoTA comparison. A number of methods have been proposed since the original paper to improve both the quality and efficiency of 3DGS (2DGS, RadSplat, etc.). A few relevant to the paper: Hierarchical 3D Gaussian Representation (Kerbl et al., SIGGRAPH Asia 2024), Scaffold-GS (Lu et al., CVPR 2024), Octree-GS (Ren et al.). \\n\\nThe authors can also provide results on the MipNeRF360 dataset which is used more commonly in radiance field literature; this will make it easier to compare their results against contemporary 3DGS methods.\\n\\nAs is, the paper is an assortment of good practical improvements for a sparse recon + 3DGS reconstruction system, and I am positive that these insights can be valuable for practitioners in the field. However, these small contributions are scattered across the pipeline and none are evaluated as thoroughly as they should be with SoTA methods and good baselines respectively for each, making it difficult to place the value/significance of contributions in the context of SoTA.\", \"questions\": [\"See questions in the Weaknesses section.\", \"What does \\\"w/o structure estimation\\\" mean in Table 4?\", \"Which datasets are used for results in Table 4, Table 5, Table 7?\", \"Is Dust3r used for only pairwise pose estimation or is the step that yields globally aligned point maps and poses also used?\", \"The definition of S_3^i in Eqn 1 is ambiguous; can you clarify what this set includes?\"], \"minor\": [\"The abstract mentions that: \\\"This paper investigates ... reconstructing high-quality, large-scale 3D open scenes from images.\\\" but the paper deals with scenes with 100 images, 600 images, and two scenes with 1500-2000 images. 
Typically, large-scale in the context of SfM and 3DGS refers to city-scale scenes with tens of thousands of images.\", \"It should be clarified whether I(p) in Eqn 8, meaning \\\"intensity or color at pixel p in the image\\\", refers to the GT image or the rendered image.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The submission received positive reviews from all the reviewers. The reviewers generally appreciate the clarity and recognize the significance and experiments of the work. After reading the paper, the reviewers' comments and the authors' rebuttal, the AC agrees with the decision by the reviewers and recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised questions regarding insufficient experimental comparisons (4dAT, MKvT, X4qU) and clarity on details (4dAT, MKvT, X4qU). The questions were adequately addressed by the authors. The AC agrees with the reviewers' evaluation that the paper should be accepted.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"Focusing on the large-scale Gaussian-based reconstruction pipeline, the authors propose a graph-guided framework, GraphGS, which leverages spatial priors for scene structure estimation to create a camera graph encoding camera topology. Using graph-guided multi-view consistency and an adaptive sampling strategy, GraphGS enhances the 3D Gaussian Splatting optimization, mitigating overfitting to sparse viewpoints and accelerating reconstruction. Quantitative and qualitative evaluations across multiple datasets demonstrate that GraphGS achieves state-of-the-art performance in 3D reconstruction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The manuscript lacks a detailed explanation for each term in Equation (1), which would enhance clarity and understanding.\\n2. Many of the authors' methods are designed to improve upon COLMAP. It would be beneficial to include experiments comparing the accuracy of initial values in GS, such as pose accuracy, to illustrate the improvements.\", \"questions\": \"1. If the spatial prior-based structure relies on the initial pose estimation, wouldn\\u2019t inaccurate initial results lead to a poorly constructed graph?\\n2. In Table 1, why doesn\\u2019t the FPS of the proposed method exceed that of the original 3D Gaussian Splatting (3DGS)? Intuitively, the pose estimation should contribute positively. Where is the additional computation time being spent?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers\\n\\nThank you for your efforts in reviewing our submission. As the rebuttal deadline approaches, your feedback would be greatly appreciated and highly valuable. We have carefully addressed the comments received so far and hope for the opportunity to engage with your insights as well. Your comments are invaluable for improving our work and fostering meaningful discussions.\\n\\nThank you again for your time and contribution to the review process!\\n\\nBest regards\"}", "{\"title\": \"Last day for interactive discussions!\", \"comment\": \"Dear authors and reviewers,\\n\\nThe interactive discussion phase will end in one day (November 26). Please read the authors' responses and the reviewers' feedback carefully and exchange your thoughts at your earliest convenience. This would be your last chance to be able to clarify any potential confusion.\\n\\nThank you, \\nICLR 2025 AC\"}", "{\"comment\": \"Thank you for your detailed feedback. We appreciate your recognition of the practicality of our method. 
To the best of our knowledge, we are the first to propose a graph-guided 3DGS optimization method for outdoor scenes, which can effectively accelerate and enhance the performance of 3DGS methods such as 3DGS, OctreeGS, ScaffoldGS, 2DGS, etc.\\n\\n> **Q1: Implementation Details of Dust3R.**\\n\\n**A:** The details are as follows:\\n1. Time and Hardware: Each pair is estimated in 0.01 seconds using an NVIDIA RTX3090 with 24GB of memory. There is no need for Dust3R\\u2019s global alignment, as our method relies solely on pairwise pose estimation.\\n2. Datasets: For the Waymo dataset, we use 599 pairs (600 images) for evaluation; for the Mill 19 dataset, we use 1999 pairs (2000 images) for evaluation.\\nPlease note that Dust3R is not mandatory. Coarse poses from the datasets can also be used directly in our proposed framework.\\n\\n> **Q2: Experiment Details for COLMAP.**\\n\\n**Q2.1: Which dataset is used to compute the vocab-tree?**\\n\\n**A:** Dataset Settings: We use the Waymo dataset, which contains 600 images, selecting three views (left, middle, right) for each. We also use the Building dataset from Mill-19, which contains 2000 images.\", \"the_setting_of_vocab_tree\": \"The vocab tree is pre-built on the Flickr100k dataset, which is provided officially at https://demuc.de/colmap/ . We use the item \\\"Vocabulary tree with 256K visual words\\\" for evaluation.\\n\\n**Q2.2: How many nearest neighbors are retrieved, and how many pairs are evaluated in total?**\\n\\n**A:** The setting of CNNP is \\\"r=5, h=20, w=1\\\", which is an empirical setting with a low BA failure rate. Under this setting, ~ 18k / $C_{600}^2$ pairs will be selected for 0.6k images; ~ 200k / $C_{2000}^2$ pairs will be selected for 2k images.\", \"explanation_of_parameters_of_cnnp\": \"1. \\\"r=5\\\" means that for every camera c_i, the nearest 5 cameras to c_i will be selected as matching pairs\\n2. \\\"h=20, w=1\\\": For camera c_i, we select 1 camera from every 21 cameras as matching pairs. 
This selection is also based on distance order to c_i, as illustrated in Fig. 2 (left) of the main paper.\\n\\n**Q2.3: Computing Resources.**\\n\\n**A:** On an NVIDIA RTX3090 GPU and an AMD EPYC 7542 CPU, COLMAP\\u2019s SfM time is 104 minutes for the Waymo dataset and exceeds 24 hours for the Mill 19 dataset. In contrast, our method\\u2019s SfM time is 23 minutes for Waymo and 206 minutes for Mill 19.\\n\\n> **Q3: Comparison with Other Existing Efficient SfM Methods.**\\n\\n**A:** Although numerous efficient SfM methods have been proposed, COLMAP remains widely recognized by the community and is considered a crucial benchmark for comparison. As mentioned in GLOMAP [R2], \\u201cGLOMAP achieves a similar level of robustness and accuracy as state-of-the-art incremental SfM systems (COLMAP, Sch\\u00f6nberger & Frahm) while maintaining the efficiency of global SfM pipelines.\\u201d \\nMoreover, the latest SfM methods like VGGSfM [R1], GLOMAP [R2], and ACEZero [R3] also use COLMAP as their primary comparative baseline. We have extensively evaluated these recent methods on the Waymo dataset.\\n| Methods | Time | PSNR | SSIM | LPIPS |\\n|:---------: |--------- |:---------: |:---------: |:---------: |\\n| VGGSfM[R1] | - | - | - | - |\\n| Ace0 [R3] | **10min** | 17.50 | 0.725 | 0.354 |\\n| PixSfM [R4] | 130min | 28.75 | 0.847 | 0.366 |\\n| Glomap [R2] | 28 min | 27.65 | 0.824 | 0.388 |\\n| Colmap | 154min | 29.14 | 0.89 | 0.250 |\\n| Ours | 23 min | **30.36** | **0.891** | **0.267** |\\n\\nAs shown in the table, VGGSfM encounters an out-of-memory (OOM) issue when processing 600 images, ACEZero is fast but entirely fails to converge, and other methods also underperform in pose performance for novel view synthesis compared to our method. Due to the earlier publication dates of OpenMVG, OpenSfM, and TheiaSfM that the reviewer mentioned, along with the time constraints of the rebuttal, we focus our comparison on the latest GLOMAP and other recent methods for this stage. 
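For concreteness, here is one possible reading of the CNNP setting explained in Q2.2 above, as a minimal sketch; the function name, indexing, and skip/keep pattern are our assumptions rather than the authors' actual implementation:

```python
def cnnp_select(neighbors_by_distance, r=5, h=20, w=1):
    """Pick matching partners for one camera from its neighbours,
    listed in increasing distance order (illustrative sketch)."""
    kept = list(neighbors_by_distance[:r])      # always keep the r nearest
    rest = neighbors_by_distance[r:]            # beyond the nearest r:
    for start in range(h, len(rest), h + w):    # skip h, keep w, repeat
        kept.extend(rest[start:start + w])
    return kept

# For a 600-image scene, each camera has 599 candidate partners:
selected = cnnp_select(list(range(1, 600)))
print(len(selected))        # 33 partners per camera under this reading
```

Under this reading, each camera keeps 33 partners, i.e. roughly 600 x 33 = 20k directed pairs for 600 images, in the same ballpark as the ~18k pairs quoted above, though the exact indexing in the actual implementation may differ.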
More results will be added in a future version.\"}", "{\"title\": \"Thoughts on new experiments and results\", \"comment\": \"I would like to thank the authors for answering many of the questions with concrete experimental results in such a short period of time. Based on the new results shared by the authors, many of my concerns have been addressed. It is particularly promising that the proposed collection of ideas improves GS quality in conjunction with multiple methods apart from vanilla 3DGS. I think the SfM runtime comparison with other baselines also provides good evidence that the improvements add to efficiency in a notable manner.\\n\\nGiven the new experiments and results, my main suggestion to the authors would be to pay attention to the framing and premise of the proposed method. In my opinion, this is a paper of high practical value: it's a number of small algorithmic changes that collectively lead to notable improvements. I recommend the authors to provide as much detail as possible of their practical setup, empirical evidence, honest discussions of any limitations they observed (for example, the authors shared the challenges brought on by differences between autonomous driving datasets vs. more typical 360 datasets), etc. Such discussion will make the paper stronger and add to its value.\"}" ] }
56mg1JFd3n
Writing in the Margins: Better Inference Patterns for Long-Context Retrieval
[ "Melisa Russak", "Umar Jamil", "Christopher Bryant", "Kiran Kamble", "Axel Magnuson", "Mateusz Russak", "Waseem Alshikh" ]
In this paper, we introduce Writing in the Margins (WiM), a new inference pattern for Large Language Models designed to optimize the handling of long input sequences in retrieval-oriented tasks. This approach leverages the chunked prefill of the key-value cache to perform segment-wise inference, which enables efficient processing of extensive contexts along with the generation and classification of intermediate information ("margins") that guide the model towards specific tasks. This method increases computational overhead marginally while significantly enhancing the performance of off-the-shelf models without the need for fine-tuning. Specifically, we observe that WiM provides an average enhancement of 7.5% in accuracy for reasoning skills (HotpotQA, MultiHop-RAG) and a 30.0% increase in the F1-score for aggregation tasks (CWE). Additionally, we show how the proposed pattern fits into an interactive retrieval design that provides end-users with ongoing updates about the progress of context processing, and pinpoints the integration of relevant information into the final response. We release our implementation of WiM using Hugging Face Transformers library at <anonymised URL>.
[ "chunked prefill", "long context inference", "interactive inference" ]
Reject
https://openreview.net/pdf?id=56mg1JFd3n
https://openreview.net/forum?id=56mg1JFd3n
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vjAnkJgjtY", "sSV9TuKcis", "aywQOw2ErJ", "ampRSo47Ia", "YOlDNTXsjR", "YJVJHWLM1K", "QAiAuquPL4", "J9lcHpJtOo", "HPNDWV6Jtp", "FW4th5X1ke", "CDaizZ1l4L", "A3G2Z5LNaB", "7ExGXKZZGl", "24vnXQ5vvD", "0EHUtWPrsd" ], "note_type": [ "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523632086, 1731952642144, 1732116481034, 1734795522142, 1732123801602, 1732630118993, 1732546368513, 1732165508172, 1732624298583, 1730646376785, 1730703696947, 1731952132416, 1731951395326, 1730672052156, 1730369054917 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Area_Chair_vZ2F" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_ztVt" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_ztVt" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_ztVt" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_Babj" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_Jf6u" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Authors" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_ztVt" ], [ "ICLR.cc/2025/Conference/Submission4307/Reviewer_uuU8" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your time in reviewing our paper. 
To answer your comments/questions:\\n\\nIn terms of adding baselines, we chose seven of the best-performing off-the-shelf models that supported a context window of 128k (based on the LMSYS Chatbot Arena at the time of our study). We also only had access to a single 8xH100 node, so we were unable to test some of the largest models (e.g. Llama 405B). Within these constraints, are there any specific baselines you think we should include that would make the paper stronger?\\n\\nAs for a deeper analysis, we briefly mentioned on Line 362 that WiM tends to increase the verbosity of LLM answers, which is at odds with SQuAD, which typically expects short answers. We agree this is an important observation and so will expand the analysis in this section.\"}", "{\"comment\": \"Thank you for your time in reviewing our paper. To answer your comments/questions:\\n\\nWe regret that you found our hypotheses unclear, but you are correct that our fundamental hypothesis is that long-context inference can be improved when the context is processed with intermediate margin notes rather than all at once. We can easily make this more explicit in the paper.\", \"the_3_experimental_settings_are_basically\": \"1. LLM: The model is given the full, long context and asked to answer a question. \\n2. RAG: The model is given the full, long context from which it first classifies chunks as relevant/irrelevant to a question, and then second answers the question based only on the relevant chunks.\\n3. WiM: The model is given the full, long context and generates margin notes for each chunk (similar to summarising chapters in a book) which are concatenated to the full context if they are considered relevant to a question, and then the model answers the question based on the full context + margin notes. \\n\\nIn terms of results, it feels important to clarify that each cell in Table 4 represents how well a model was able to answer a question given a context. 
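For concreteness, the WiM setting (3) described above can be sketched roughly as follows. This is illustrative only: `llm` is a stand-in for a model call, and the actual implementation reuses a chunked prefill of the KV cache rather than re-encoding the growing context on every call.

```python
def wim_answer(llm, chunks, question):
    """Sketch of the WiM inference pattern (names are illustrative)."""
    context, margins = "", []
    for chunk in chunks:
        context += chunk
        # 1) write a margin note for the newest segment
        note = llm(context + "\nWrite a margin note for the latest segment "
                   "with respect to: " + question)
        # 2) keep the note only if it is classified as relevant
        verdict = llm("Answer YES or NO: is this note relevant to '"
                      + question + "'?\n" + note)
        if verdict.strip().upper().startswith("YES"):
            margins.append(note)
    # 3) answer from the full context plus the relevant margin notes
    return llm(context + "\n" + "\n".join(margins) + "\n" + question)
```

The prompts here are invented for illustration; the point is only the flow of segment-wise note writing, relevance filtering, and a final answer over context plus kept notes.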
All of these results are deterministic (temperature = 0): the model either generated an answer that matched the reference, or it did not. It's thus not clear to us how statistical significance tests would help in this context. \\n\\nFinally, the ablation studies just present variations of the main experiment. In the main experiment, WiM only makes use of margins if they are classified as relevant to the question, so we wanted to show that appending *all* margins, both relevant and irrelevant, harmed performance (Section 5.1). Similarly, Section 5.2 shows what happens if the model only has access to the margin notes when answering the question (i.e. without the full original context), which demonstrates that WiM works best 1) when irrelevant margin notes are excluded, and 2) when margin notes are appended to the full context. \\nSection 6 simply lists the advantages of the WiM inference pattern in terms of a real-world use-case with human users.\"}", "{\"metareview\": \"This paper proposes a new method called Writing in the Margins (WiM) to improve inference for long-context retrieval. The strategy presented can be applied to different foundation models and requires minimal additional computation. Although a reviewer gave a very high score, after reading the reviews, discussions, and the paper itself, I think this paper has the following key weaknesses: 1) The experimental section lacks sufficient baseline comparisons, which was pointed out by multiple reviewers who provided different suggestions; 2) The current test samples are too few. Although multiple datasets are used, only 100 cases were selected from each dataset, which is insufficient to validate and demonstrate the method's effectiveness; 3) There is a lack of comparative and analytical experiments. While the authors responded during the rebuttal phase, the issues mentioned above were not addressed. 
Therefore, I think this paper is far from ready for publication.\", \"additional_comments_on_reviewer_discussion\": \"The authors responded during the rebuttal phase, but they were unable to address the reviewers' concerns.\"}", "{\"comment\": \"Thanks for your reply. I just want to be sure I understand your results. The table's cells are average accuracies across the set of evaluation examples described in 3.1? And you are comparing those averages in your analysis?\"}", "{\"comment\": [\"The submission implicitly poses the hypothesis \\\"long-context inference can be improved when the context is processed with intermediate margin notes rather than all at once\\\". A set of experiments was conducted to test this hypothesis. Statistical significance testing provides insight into whether the observed means support the hypothesis. In addition to the reference provided in the original review, I strongly encourage the authors to review the following for more motivation:\", \"Rainio, O., Teuho, J. & Kl\\u00e9n, R. Evaluation metrics and statistical tests for machine learning. Sci Rep 14, 6086 (2024). https://doi.org/10.1038/s41598-024-56706-x\", \"Rotem Dror, Gili Baumer, Marina Bogomolov, and Roi Reichart. Replicability analysis for natural language processing: testing significance with multiple datasets. Transactions of the Association for Computational Linguistics, 5:471--486, 2017.\", \"Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th annual meeting of the Association for Computational Linguistics (volume 1: long papers), 1383--1392, Melbourne, Australia, July 2018. Association for Computational Linguistics.\", \"Stefan, Angelika M. and Sch\\u00f6nbrodt, Felix D. Big little lies: a compendium and simulation of p-hacking strategies. R. Soc. Open Sci. 10, 220346. 
2023.\", \"Performing the appropriate hypothesis tests is necessary for empirical results unless the authors want the contribution to be evaluated along other dimensions.\"]}", "{\"comment\": \"Great! That should be the correct information to statistically test your hypothesis. Please see the suggestions in the original review.\"}", "{\"comment\": \"Yep, please ask if anything is unclear!\\n\\nEach cell represents performance over 100 questions for each dataset/context length. So, for example, in the top left cell, we asked Phi3-small to answer 100 HotpotQA questions given up to 16k context, and it correctly answered 47/100. The cell below asks the same model to answer the same questions, except using RAG, and it successfully answered 55/100. The set of 100 questions+contexts is constant for each column. \\n\\nOnly the rightmost column and bottom row (both coloured blue) contain averages. For example, Phi3-small achieved an average performance of 52% accuracy using the LLM strategy across HotpotQA, MultiHop RAG and SQuAD, rising to 58% with RAG, and 66% with WiM.\"}", "{\"comment\": \"Before we consider adding significance tests to the paper, could you please explain why these stats are necessary and how they will improve our paper (and your review)?\\n\\nOur experimental design is based on the popular RULER and MultiHopRAG benchmarking frameworks, which carried out similar evaluations and did not require significance tests, so we would like to better understand the value these tests will add.\", \"refs\": \"\", \"multihoprag\": \"https://openreview.net/forum?id=t4eB3zYWBK#discussion\", \"ruler\": \"https://openreview.net/forum?id=kIoBbc76Sy#discussion\"}", "{\"summary\": \"The authors propose and investigate the usage of intermediate information (margins) for improving long-context retrieval. 
They compare different small and medium-sized LLMs as well as a RAG-like system and find improvements over these baselines in many cases.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"interesting and original idea\", \"comparison with several baselines\", \"improvements over these baselines\"], \"weaknesses\": [\"comparison and discussion not complete, as larger models (which show smaller improvements) and more sophisticated RAG systems are not included\"], \"questions\": \"1. Larger models seem to profit less from WiM (Table 4), and you do not include models larger than 70B. Would models larger than 70B still see improvements with WiM? Can you discuss this in more detail?\\n2. RAG is best with SQuAD in many cases, and almost always better than WiM. You argue that with multihop Q&A this is no longer the case (as shown in Table 4), but isn't this only true for your RAG implementation / approximation, and wouldn't more sophisticated RAG systems improve this score?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new inference methodology called \\\"writing in margins\\\" for long context tasks. The method builds upon the chunked prefill strategy (commonly used while dealing with long contexts to avoid the quadratic growth of memory), dividing long input contexts into manageable segments and generating \\\"margins\\\" or intermediate summaries for each chunk.\\nThe margins are then classified by the same LLM as useful or not-useful, and useful margins are kept as part of the context and used during the decoding step. 
\\nThe approach seems to significantly help LLMs (especially smaller ones) achieve better accuracy during decoding.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper provides a number of thought-provoking outcomes.\\n1) It showcases how the simple strategy of adding notes or summaries in the \\\"margins\\\" after each prefilled chunk can assist in improving LLM reasoning and retrieval capabilities. \\n2) The notes written by the LLM can potentially be used to improve explainability of the final decoded output. This is dependent on whether the question asked for the margin generation is useful. In the paper the authors ask the LLM whether the context is relevant to the query (and to provide a summary).\\n3) The approach is general purpose: it can be applied to any LLM without the need for finetuning, which is a big win. \\n\\nOverall, strong contribution.\", \"weaknesses\": \"1) Latency - while the authors mention that latency is slightly increased, an ablation study for this would be welcome. Since the paper uses 2 steps for each chunk - margin generation and then margin classification, you are effectively doing 2 decoding steps for the model with each chunk. This will add latency, especially if the summaries generated are long.\\n\\n2) Comparison against finetuned models - the paper mentions that this technique enables the models to perform well on tasks (long context) without the need to finetune the model (similar to RAG). It would be good to include a model finetuned for the task and using the standard Long Context LLM decoding approach.\", \"questions\": \"1) One approach the authors could explore would be to use a separate smaller LLM as the classifier. Using the base model (which can be very large) adds latency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your time in reviewing our paper. 
To answer your comments and questions:\\n\\nSince we only had access to a single 8xH100 node, we were unfortunately unable to evaluate the largest models that supported 128k context windows (e.g. Llama 405B). Instead, we focused on a wide range of top models (of different sizes) according to the LMSYS Chatbot Arena in order to test WiM in a number of different scenarios. Our largest model was actually 72B parameters (Qwen2-72B-Instruct, penultimate row in Table 4) and showed similar improvements to smaller models. \\n\\nAs for RAG, one of the reasons RAG is best with SQuAD is because WiM tends to produce more verbose output than RAG, and this is at odds with the short reference answers in SQuaD (Line 362). We also acknowledge that our RAG system is already somewhat idealised (Line 285) in that segments are selected based on a LLM classifier rather than, e.g., cosine similarity between the segment and the query, which means RAG scores may be inflated. Regardless, our intention with WiM is not to improve RAG, but to introduce a new inference pattern to improve long context inference in general.\"}", "{\"comment\": \"Thank you for your time in reviewing our paper. To answer your comments/questions:\\n\\nOn latency, we actually only had one extra decoding step by combining the margin generation and classification steps; i.e. if the first generated token was \\\"NO\\\", we skip margin generation (this also bypasses the need for a smaller LLM classifier). 
We discussed this in Appendix C1, but you're right that this information is important, so we will bring it into the main paper and discuss further.\\n\\nAs for finetuning, we agree it would be informative to evaluate a finetuned model in relation to the others, but we already had a lot of content and wanted to focus on off-the-shelf models.\"}", "{\"summary\": \"The authors present a method for improving the representation of chunked text in a prompt by computing query-specific representations (margin notes) for each chunk. They hypothesize that this expanded and query-specific text allows for more efficient and effective decoding. To test this, the authors apply their method to several baseline models across three tasks: multi-hop reasoning, single-hop retrieval, and aggregation. Post-hoc analysis involves an ablation study.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"**Interesting approach to query-specific representation expansion.** Bootstrapping decision-making with model information (e.g., writing the margins) is a compelling way for a model to guide itself toward a better response.\", \"**Focus on effectiveness and efficiency.** The authors discuss both the effectiveness of their method and how it can improve the efficiency during decoding.\", \"**Extensive experimentation.** Notwithstanding concerns (below), testing the approach on multiple settings and across multiple models is a rigorous way to test a model. The authors could have improved the discussion on how performance varies and what that implies about the proposed method.\"], \"weaknesses\": [\"**No formal statement of hypotheses.** This is perhaps implicit, but given the number of experiments, it is essential to be explicit about the precise hypotheses the experiments test. 
As best I can tell, one hypothesis is that treatment with margin notes will be better than treatment with other methods (LLM and RAG baselines) across a fixed condition (e.g., length variant, task). There are some allusions to other hypotheses (e.g., comparisons across columns), but that's less clear. This is important because of the next point.\", \"**No formal hypothesis tests.** There are a lot of numbers in Table 4+. Results in bold seem to be the max within some context. However, it's not clear if any of these differences are (a) statistically significant and/or (b) if those tests have accounted for multiple comparisons (since these datasets are being reused...a lot). Without this, it's difficult to understand the robustness of these results. In order to address this, you can consult the literature on significance testing (Cohen's \\\"Empirical Methods for Artificial Intelligence\\\" is good; tutorials from the RecSys/information retrieval communities are also good) and correcting for multiple comparisons (see those tutorials from the RecSys/information retrieval communities).\", \"**Writing falls off at the end.** Starting with the ablation experiments (Section 5), the flow and writing of the paper weaken. Why do these ablation experiments make sense? What are the implications? What is the argument of Section 6? How are all of these things connected to the core hypothesis of the paper?\"], \"questions\": [\"The main results in Table 4 present many metric values repeatedly measured using a fixed dataset and multiple algorithms. No statistical significance tests are shown. This severely compromises the integrity of the results. 
Were these tests conducted\\u2014with appropriate corrections for multiple comparisons\\u2014but not reported?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a new inference pattern called \\\"Writing in the Margins\\\" (WiM) that addresses the challenges of processing long input contexts in retrieval-oriented tasks. WiM leverages the chunked prefill mechanism in large language models to generate intermediate \\\"margin notes\\\" that summarize relevant information for the given query. These margin notes are then incorporated into the final response, leading to significant performance boosts on benchmarks like HotpotQA and Common Words Extraction compared to vanilla long-context models and retrieval-augmented approaches. The paper also discusses how WiM can enhance the transparency and interactivity of the retrieval process by providing users with real-time insights into the model's reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors introduce a novel inference pattern called \\\"Writing in the Margins\\\" (WiM) that leverages the chunked prefill mechanism in large language models to generate intermediate \\\"margin notes\\\" that can guide the final prediction. This is a clever way to address the challenges of long-context processing in retrieval-oriented tasks.\", \"The results show that WiM can significantly boost the performance of off-the-shelf models across a range of long-context benchmarks, including multi-hop reasoning and aggregation. 
This demonstrates the effectiveness of the proposed approach.\"], \"weaknesses\": [\"The experimental setup could be expanded to include more baselines, such as state-of-the-art models specifically designed for long-context processing to better assess the relative performance of WiM.\", \"While the results are strong, the paper could benefit from a deeper analysis of why WiM works well for some tasks (e.g., multi-hop, aggregation) but not as consistently for others (e.g., single-hop QA). Understanding the underlying mechanisms behind these performance differences would strengthen the contributions.\"], \"questions\": \"Please refer to the \\\"Weaknesses\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
56Zn3halhq
Learning Augmentation Policies from A Model Zoo for Time Series Forecasting
[ "Haochen Yuan", "Xuelin Li", "Yunbo Wang", "Xiaokang Yang" ]
Time series forecasting models typically rely on a fixed-size training set and treat all data uniformly, which may not effectively capture the specific patterns present in more challenging training samples. To address this issue, we introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning. Our approach begins with an empirical analysis to determine which parts of the training data should be augmented. Specifically, we identify the so-called marginal samples by considering the prediction diversity across a set of pretrained forecasting models. Next, we propose using variational masked autoencoders as the augmentation model and applying the REINFORCE algorithm to transform the marginal samples into new data. The goal of this generative model is not only to mimic the distribution of real data but also to reduce the variance of prediction errors across the model zoo. By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance, advancing the prior art in this field with minimal additional computational cost.
[ "Time Series Forecasting", "Data Augmentation" ]
https://openreview.net/pdf?id=56Zn3halhq
https://openreview.net/forum?id=56Zn3halhq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rktRGziC4b", "XsuMyaQsTa", "T0jAr9H1TQ", "CkncRnlcCT", "5ZeOS7t8s5" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730684498111, 1731469152226, 1730708446929, 1730655599055, 1730719177507 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission263/Reviewer_CgNu" ], [ "ICLR.cc/2025/Conference/Submission263/Authors" ], [ "ICLR.cc/2025/Conference/Submission263/Reviewer_kFJn" ], [ "ICLR.cc/2025/Conference/Submission263/Reviewer_Ec4E" ], [ "ICLR.cc/2025/Conference/Submission263/Reviewer_TrPt" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents AutoTSAug, a novel data augmentation method for time series forecasting that uses reinforcement learning to learn optimal augmentation policies. The key innovations are: (1) using a \\u2018\\u2018model zoo\\u2019\\u2019 of pretrained forecasting models to identify \\u2018\\u2018marginal samples\\u2019\\u2019 that would benefit most from augmentation, and (2) employing a variational masked autoencoder trained with REINFORCE to generate augmented data that reduces prediction variance across the model zoo. The method shows consistent improvements over baselines across multiple datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The use of model zoo diversity to identify samples for augmentation.\", \"Interesting combination of VAE and RL.\", \"Comprehensive empirical validation across multiple datasets.\", \"Thorough ablation studies supporting design choices.\", \"Practically useful with reasonable computational overhead.\"], \"weaknesses\": [\"Limited theoretical justification for focusing exclusively on high-variance samples. Could you provide a more formal theoretical justification of this claim? Or empirically prove that augmenting low variance samples (hard ones) is not beneficial?\", \"Potential oversight of valuable transformations for low-variance samples. 
Could you apply your augmentation framework to augment hard samples, with a different reward function that would improve the forecasting results on these samples?\", \"Over-reliance on model zoo\\u2019s diversity criterion without stability analysis.\", \"Risk of generating uniformly poor samples due to variance-based reward. Are there any safeguards to prevent generating uniformly poor samples (e.g. variance >> generation error)?\", \"Limited comparison with state-of-the-art augmentation methods.\", \"Missing analysis of hard samples that consistently perform poorly.\", \"Insufficient justification for choosing REINFORCE over other RL algorithms.\"], \"questions\": \"1) How sensitive is the method to the choice of models in the zoo? What criteria should be used for model selection?\\n2) Why not consider augmenting \\u2018\\u2018hard samples\\u2019\\u2019 that consistently perform poorly across the model zoo? Perhaps, augmenting these hard samples may help the models uncover their underlying patterns, and improve their performance?\\n3) How does the approach ensure that minimizing variance doesn\\u2019t lead to uniformly poor samples? 
For instance, if the variance is very high, and the agent gives more importance to the variance criterion.\\n4) What advantages does REINFORCE offer over alternative policy optimization methods like PPO or TRPO?\\n5) How does the method compare to approaches with theoretical guarantees like Recursive Time Series Data Augmentation (RIM)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work introduces a novel data augmentation method, AutoTSAug, that transforms high-diversity or marginal samples in time series data to better align with standard patterns, using reinforcement learning to guide the transformations. By leveraging a model zoo, the method identifies challenging samples with high prediction variance and applies a variational masked autoencoder to generate augmented, normalized versions of these samples. This approach aims to reduce prediction error variance and improve model stability by effectively normalizing outliers rather than removing them.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Normalizing challenging, high-diversity samples rather than removing them is a notable departure from traditional outlier detection and data cleaning practices. 
The use of reinforcement learning to guide this normalization process adds further novelty, as it enables the model to learn an optimal augmentation policy that reduces prediction variance across a model zoo.\", \"weaknesses\": \"A key weakness of the paper lies in its lack of engagement with the outlier detection and data cleaning literature, which limits the reader's ability to understand the contribution in context.\", \"questions\": \"I am not sure why there is no discussion of outlier and out-of-distribution detection or concepts introduced in the data cleaning literature. I think it is crucial to see the contribution and position of this work in those literatures and provide discussions for the proposed method against the approaches in those literatures.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a data augmentation method for time series prediction. The paper focuses on augmenting \\\"marginal samples\\\", which are samples that have high prediction variance across a variety of different prediction models. The paper then proposes a generative approach using V-MAEs to augment the marginal samples and to train the generative model via a reinforcement learning approach.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes an interesting perspective on data augmentation by focusing entirely on high prediction variance samples.\", \"The paper is generally well written and the methods and results are presented well.\"], \"weaknesses\": [\"In the Related Works section, the paper presents other generative-based and RL-based methods for time-series data augmentation yet only compares their proposed method with the Gaussian noise augmentor. 
It would be nice to see more baseline comparisons (at least one each from generative-based and RL-based methods).\", \"It is not super convincing that augmenting only marginal samples results in consistently significant improvements for the prediction model, as most of the results presented in Table 2 show a <5% improvement with the augmented data, with the biggest improvement coming from the basic Transformer model.\", \"Moreover, it is not entirely clear that AutoTSAug is able to consistently morph the marginal samples into samples that exhibit lower prediction variance in the model zoo.\"], \"questions\": [\"Are the marginal samples consistent across different training instances of the same model? E.g., if the model zoo is initialized with different parameters or trained with different hyperparameters, does it affect which samples are considered marginal?\", \"Since the paper proposes an RL-based training approach, does the proposed method use a multi-step training approach where the initial reconstruction is fed into the encoder as the new \\\"state\\\" and the policy model would then further augment the sample? Or does the proposed method use a single-step approach, and if so, is that enough to significantly modify the samples towards the reward function?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel data augmentation method for time series forecasting. More specifically, the proposed method\\nleverages a model zoo of pretrained forecasting models to identify the so-called \\\"marginal samples\\\" - training instances where models show a high prediction diversity. Focusing augmentation on these marginal samples is more effective than uniform augmentation across all data. To learn the augmentation policy, the method uses a variational masked autoencoder (V-MAE) as the base augmentation model. 
It then applies the REINFORCE algorithm to optimise the augmentation policy using model zoo prediction variance as feedback. The goal is to generate augmented data that reduces prediction variance across models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Presents novel ideas for data augmentation. Rather than treating all data equally for augmentation, it introduces the concept of identifying \\\"marginal samples\\\" that would benefit most from augmentation.\\nA comprehensive empirical validation across multiple datasets and models and thorough ablation studies examining key components are presented.\\nOverall well-structured presentation progressing from motivation to implementation.\\nThe design choices are well motivated.\", \"weaknesses\": \"My main concern with this paper is that the modest gains in results don't seem to justify the expensive and complicated method proposed. The requirement of multiple pretrained models itself is quite expensive. The training of the augmentation policy seems quite compute-intensive. The performance gains are marginal and are primarily driven by the base transformer. On modern transformer-based forecasting methods such as PatchTST and iTransformer the gains are marginal and it even underperforms in some cases. Overall, I think the performance gains don't justify the computation expenses required.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
562B7aLi5X
Binary Losses for Density Ratio Estimation
[ "Werner Zellinger" ]
Estimating the ratio of two probability densities from a finite number of observations is a central machine learning problem. A common approach is to construct estimators using binary classifiers that distinguish observations from the two densities. However, the accuracy of these estimators depends on the choice of the binary loss function, raising the question of which loss function to choose based on desired error properties. For example, traditional loss functions, such as logistic or boosting loss, prioritize accurate estimation of small density ratio values over large ones, even though the latter are more critical in many applications. In this work, we start with prescribed error measures in a class of Bregman divergences and characterize all loss functions that result in density ratio estimators with small error. Our characterization extends results on composite binary losses from Reid & Williamson (2010) and their connection to density ratio estimation as identified by Menon & Ong (2016). As a result, we obtain a simple recipe for constructing loss functions with certain properties, such as those that prioritize an accurate estimation of large density ratio values. Our novel loss functions outperform related approaches for resolving parameter choice issues of 11 deep domain adaptation algorithms in average performance across 484 real-world tasks including sensor signals, texts, and images.
[ "density ratio estimation", "domain adaptation", "composite binary losses", "class probability estimation" ]
Accept (Poster)
https://openreview.net/pdf?id=562B7aLi5X
https://openreview.net/forum?id=562B7aLi5X
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wHgdjJzJVd", "rs6wPrmyqg", "rQRerAwVlb", "nkibi5C9KZ", "n5l6DzlOLi", "lRRVap2gZ2", "l6IUwipRuJ", "eS08RMWU0r", "YZ9ICVmgiU", "X8BPkzKxhY", "ULlwEwgMIN", "RHQepzjvgZ", "OuTYn3nrk5", "Lv3hhPHWiW", "J1ErrIENOv", "B62ABTdjHp", "6nXnJUgDoj", "53m1qvPKee" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732532488491, 1732578811416, 1734359009192, 1732020753011, 1732020364340, 1730651691483, 1730149353664, 1732020687072, 1730671522378, 1730353317641, 1732019913089, 1737523652706, 1732562180366, 1730534624853, 1732020728459, 1732550928848, 1732020608833, 1732571361874 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_6SXE" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_Ps3k" ], [ "ICLR.cc/2025/Conference/Submission4639/Area_Chair_X7mn" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_1wrz" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_55K4" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_FZmy" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_Ps3k" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_55K4" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_6SXE" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_FZmy" ], [ "ICLR.cc/2025/Conference/Submission4639/Authors" ], [ "ICLR.cc/2025/Conference/Submission4639/Reviewer_1wrz" ] ], "structured_content_str": 
[ "{\"title\": \"Thank you\", \"comment\": \"Thank the authors for their effort and very detailed response to my comments. After going through the other reviewers' comments and the authors' responses, I choose to keep my positive rates.\"}", "{\"title\": \"Response to author\", \"comment\": \"Thank you for addressing my questions. I will maintain my current score.\"}", "{\"metareview\": \"The paper addresses an important problem, definitely in a proper way (no pun intended). In my decision to accept the paper, I have taken into account the initial opinions and their revised version, the responses of the authors, and the paper (I read it).\\n\\nIt is important to acknowledge that the paper adopts a first principle approach to the DR problem. I would have loved the paper to go all the way forward, up to deriving in a more formal way the content of Section 5, but this part is definitely a tricky bit (this is not to say that there is no principled / general way to address it). At least, the problem of tackling large DRs is approached in a right way, from the standpoint of the design of the loss function and it is an interesting work that tries to alleviate the problem of estimating large DRs by pointing at specific losses using this theory. I commend the authors not just for their result, but also for the non-trivial path adopted. Reviewers (in particular 1wrz) had a legitimate question on the significance of the material brought to the paper, but I think the authors have made a decent explanation of their contribution. Personally, I do not just consider the result but also the *path* chosen by the authors to addressing their problem as a positive contribution which deserves publication. 
I can only encourage the authors to make this even more clear in the camera-ready version.\\n\\nThe authors may remark that addressing the sample complexity of their method may take advantage of the property of Bregman divergences that makes it possible to link DR and class probability estimation (a reduction may also be useful to tackle this additional problem). This property is in fact a specific case of a property on the perspective transform of Bregman divergences. For a general treatment and further use, see \\\"A scaled Bregman Theorem with Applications\\\" by Nock, Menon and Ong, NeurIPS'16.\", \"additional_comments_on_reviewer_discussion\": \"I particularly appreciated the attention to detail of the authors in answering technical questions (e.g. 1wrz).\"}", "{\"title\": \"Answer\", \"comment\": \"We thank you for your comments which help to improve our work and appreciate all your positive comments!\\nAll mentioned minor issues are clarified in the new manuscript.\", \"regarding_your_question\": \"yes, we refer to the convergence of the empirical risk minimizer to the true density ratio in terms of some suitable error term as, e.g., the Bregman divergence. Sample complexity bounds of [1] don't hold, as our novel loss functions are not self-concordant.\\n\\n[1] Zellinger, Werner, Stefan Kindermann, and Sergei V. Pereverzyev. \\\"Adaptive learning of density ratios in RKHS.\\\" The Journal of Machine Learning Research 24.1 (2023): 18863-18891.\\n\\nThank you again for your time and effort used to review our manuscript.\"}", "{\"title\": \"Answer\", \"comment\": \"Thank you very much for your comments which help to improve our work! Our answers are as follows:\\n- While we agree with the essential structure of your proof, we disagree with your points 1 and 3. 
One counterexample for your point 3 (stating $\\\\underline{L}(\\\\eta)=-\\\\phi(\\\\eta)$) is KuLSIF (Example 1) with\\n $$\\n \\\\ell_1(y):=-y, \\\\ell_{-1}(y):=\\\\frac{y^2}{2},\\\\Psi^{-1}(y):=\\\\frac{y}{1+y}\\n $$\\n such that\\n $$\\n \\\\underline{L}(\\\\eta)=\\\\eta\\\\ell_1(\\\\Psi(\\\\eta))+(1-\\\\eta)\\\\ell_{-1}(\\\\Psi(\\\\eta))= -\\\\frac{\\\\eta^2}{1+\\\\eta}+\\\\frac{1}{1+\\\\eta}\\\\frac{\\\\eta^2}{2}=\\\\frac{-\\\\eta^2}{2+2\\\\eta}\\n $$\\n and, by [3, Proposition 3],\\n $$\\n -\\\\phi(\\\\eta)=(1+\\\\eta) \\\\underline{L}(\\\\frac{\\\\eta}{1+\\\\eta})=-\\\\frac{\\\\eta^2}{2}\\n $$\\n which are different for $\\\\underline{L}(\\\\frac{1}{2})=\\\\frac{1}{4}\\\\neq\\\\frac{1}{8}=-\\\\phi(\\\\frac{1}{2})$. \\n Instead of point 3, Lemma 5 can be used (updated version).\\n\\n Furthermore, concerning your point 1, we note that Theorem 4 in [2] proves equality of Bregman divergences for equal (up to affine terms) generators. However, we need equality (up to affine terms) of generators when we know equality of *expected* Bregman generator functions, i.e., the other direction.\\n Instead of your point 1, our Lemma 4 (updated version) can be used, which proves the equality of generators by constructing two witness probability measures in a non-trivial two-page proof. \\n\\n To improve our presentation, we extended the title, abstract, the introduction, Remark 1 and we split Lemma 4 (submitted version) in two Lemmas (4 and 5 in updated version), see the green text in updated submission document. 
\\n- We disagree that the canonical loss in [1, Section 6.1] is the same as the loss derived from $g_\\mathrm{can}$.\\n First, from the same KuLSIF example as used above, we see that the last equality of your derivation does not hold, since $-\\underline{L}''(\\eta)\\neq \\phi''(\\eta)$, e.g., for $\\eta=\\frac{1}{2}$.\\n Second, the equation $\\Psi'(c)=w(c)$ in [1, Section 6.1] leads to a loss function that differs from the one derived by Eq.(10) by a non-trivial factor of $(1+c)^3$, see Remark 3 (updated version).\\n- The loss functions follow directly from Section 4 by plugging $\\phi$ and $g_\\mathrm{can}$ in Eq.(8). We added a remark and uploaded a simple Mathematica notebook for algebraic correctness.\\n- We prove that our novel loss functions assign larger weight to large density ratio values than to smaller ones, which is in contrast to related approaches (see [3]).\\n Moreover, we illustrate this behavior in numerical examples (Figure 1, Figure 2, Figure 3).\\n Finally, we show that our losses outperform others in extensive state-of-the-art benchmark experiments.\\n\\nWe thank you again for your time and effort used to review our work.\\nIf you are satisfied with our answers and extensions in the updated manuscript, we kindly ask you to take this into account in your final decision.\"}", "{\"summary\": \"The authors present theoretical results characterizing strictly proper binary loss functions that lead to minimizers of Bregman divergences in probability density estimation. According to these theoretical results, they propose a novel loss function that prioritizes accurate estimation of large density ratio values over smaller ones. 
They also empirically validate the effectiveness of their proposed loss function through numerical experiments, demonstrating that the novel loss function can lead to improvements in parameter selection for domain adaptation tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"References to existing research are sufficiently provided.\", \"weaknesses\": \"#### Major Weaknesses:\\n1. There are concerns regarding the novelty of the theoretical results presented in this study. Specifically, results such as the necessity of Equation (8) in Theorem 1 appear to be easily derived from findings in prior work referenced by this study ([1], [2], and [3]). A detailed examination of this issue is provided below.\\n2. Additionally, the canonical form of the density ratio link, given in Equation (10), does not constitute a new result, as it can be derived from results presented in prior studies ([1]). A detailed examination of this issue is provided below.\\n3. The proposed loss function, derived from Equation (11) in Section 5, lacks a clear connection to the theoretical results previously established in Section 4.\\n4. Moreover, it is unclear how the proposed loss function specifically addresses shortcomings associated with existing loss functions. The authors are encouraged to include mathematical analysis, such as theorems, to clarify the properties of the proposed loss function.\\n\\n#### Minor Weakness:\\n5. Figures 2 and 3:\\n - The axis titles are missing, making it difficult to interpret the graphs.\\n - In particular, Figure 3 lacks explanatory labels for each axis, which are needed to understand these experimental results.\\n\\n\\n---\\nHereafter, details of the major weaknesses, specifically Weaknesses 1 and 2, are discussed.\\n\\n#### About major weakness 1:\\nEquation (8) can be derived as follows:\\n1. 
From Theorem 4 in [2], we know that $B_{\\\\phi} = B_{\\\\phi'}$ for any $\\\\phi'$ with $\\\\phi'(y) = \\\\phi(y) + c_2 y + c_1$, where $c_2$ and $c_1$ are constants. This fact implies that terms such as $\\\\hat{\\\\eta} c_2 $ and $c_1$ in the definition of $\\\\gamma(\\\\cdot)$ (line 236) are redundant.\\n2. From Theorem 4 in [1], we know that $L(\\\\eta, \\\\mu) = \\\\underline{L}(z) + (\\\\eta - \\\\mu) \\\\underline{L}'(\\\\mu)$.\\n3. Additionally, we have $\\\\underline{L}(\\\\eta) = - \\\\phi(\\\\eta)$ because $\\\\underline{L} = L(\\\\eta, \\\\eta) = \\\\eta l_1(\\\\eta) + (1 - \\\\eta) l_2(\\\\eta) = \\\\gamma(\\\\eta) = - \\\\phi(\\\\eta)$.\\n4. Thus, $L(\\\\eta, \\\\Psi^{-1}(y)) = - \\\\phi(\\\\Psi^{-1}(y)) - (\\\\eta - \\\\Psi^{-1}(y)) \\\\phi' (\\\\Psi^{-1}(y))$, where Equation (8) represents the cases for $\\\\eta = 0$ and $\\\\eta = 1$ in this equation.\\n\\n#### About major weakness 2:\\nFrom Corollary 3 in [1] and the discussion in Section 6.1 of [1], it follows that $(g^{-1}_{can})' (c) = w(c) = - \\\\underline{L}''(c) = \\\\phi''(c)$. There appears to be no significant difference between Equation (10) and this equation.\\n\\n---\\n\\n[1] Reid, M. D., & Williamson, R. C. (2010). Composite binary losses. The Journal of Machine Learning Research, 11, 2387-2422.\\n\\n[2] Reid, M. D., & Williamson, R. C. (2011). Information, Divergence and Risk for Binary Experiments. Journal of Machine Learning Research, 12(3).\\n\\n[3] Menon, A., & Ong, C. S. (2016, June). Linking losses for density ratio and class-probability estimation. In International Conference on Machine Learning (pp. 304-313). 
PMLR.\", \"questions\": [\"Considering major weaknesses 1 and 2 discussed above, could you provide more additional discussions to clarify the novel contributions of your study?\", \"Considering major weaknesses 3 and 4 discussed above, could you provide further detailed information to elucidate the effectiveness of your approach discussed in Section 5?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"A standard technique to estimate the ratio between densities of two probability distributions $P$ and $Q$ from samples from each of the distributions is to train a binary classifier that distinguishes the samples. Namely, one labels samples from $P$ with the label $1$, and samples from $Q$ with the label $-1$, and empirically minimizes a loss function on these samples. Depending on the loss function that is chosen, it is known that the as the number of samples tends to infinity, the empirical minimizer effectively minimizes a certain Bregman divergence between the true density ratio and the estimated density ratio.\\n\\nHowever, the convex potential of this Bregman divergence depends on the chosen loss function. Namely, the Bregman divergence to the true density ratio that our estimator ends up minimizing depends heavily on the loss function that we chose. 
As it turns out, most commonly used loss functions (like the logistic loss), correspond to Bregman divergences that do not appropriately penalize discrepancies at large density ratio values, instead penalizing density ratio errors at smaller values---this might lead to suboptimal density ratio estimates in many applications.\", \"this_paper_takes_an_inverted_approach\": \"given a convex potential $\\\\phi$ (and a ``probability link function'' $g$), the paper characterizes a unique loss function $l_{\\\\phi, g}$, such that upon empirically minimizing $l_{\\\\phi, g}$, as the number of samples grows, the Bregman divergence with potential $\\\\phi$ is minimized. The paper furthermore proposes a canonical link function $g$, which induces a convex loss function $l$, and is hence computationally amenable to empirical risk minimization.\\n\\nOne convenient application of the characterization in this work is that it gives a way for a practitioner to design a loss function that would have the properties they desire. For example, as elaborated in Section 5 by the authors, if one cares about penalizing errors in larger density ratios more than errors in smaller density ratios, one can specify a potential $\\\\phi$ for a Bregman divergence $D_\\\\phi$ that ensures this, and thereafter using the characterization in the paper, obtain the associated loss function $l_\\\\phi$. If one now minimizes $l_\\\\phi$ on the samples, then in the limit, one would be minimizing $D_\\\\phi$ (which was chosen so as to prioritize accurate estimation of large density ratios). In particular, the authors specifically propose two convex potentials $\\\\phi$ for this purpose--the Exponential Weight (EW) function and polynomial weight functions, and derive the associated loss functions from their characterization.\\n\\nFinally, the authors empirically validate minimizing these loss functions as compared to the standard loss functions on a variety of synthetic as well as real-world datasets. 
They show that minimizing their loss functions leads to better performance on importance weighting tasks on a range of datasets. The experimental evaluation appears quite thorough and extensive.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The characterization provided by the authors significantly completes the picture laid out in prior work by Menon and Ong 2016, and work by Reid and Williamson (see Remark 1 for specifics). The motivation for considering the inverted problem is convincing---one can imagine ascribing certain desired properties of an estimator to the minimizer of a Bregman divergence, and thereafter, using the characterization derived in the paper, obtain the correct loss function to minimize on the data that realizes the minimization of this Bregman divergence (and hence has the desired property). The authors specifically consider the property \\\"small errors on large density ratios\\\", and obtain strong empirical results for minimizing the loss functions through their characterization. In my view, this is a valuable contribution that enhances our toolbox for estimating density ratios in a principled manner. The writing of the paper is also generally good, although at some places, it becomes dense with a lot of assumed context.\", \"weaknesses\": \"Up until you mention your first contribution, the reader has only looked at the pseudocode of Algorithm 1, which uses a probability link function $\\\\Psi$. There has been no mention of $g$ as yet. Correct me if I am wrong, but my understanding is that $g(x)$ in Algorithm 1 is simply $\\\\Psi^{-1}(x)/(1-\\\\Psi^{-1}(x))$---it would be helpful to at least mention this before introducing \\\"Density ratio link $g$\\\" in line 78. 
Because otherwise, the reader, who has just seen $\\\\Psi$ in Algorithm 1, is a little confused about where $g$ sprang out of nowhere, and how it is relevant.\\n\\n---\\n\\nMinor/typos: \\\\\", \"line_78\": \"I believe the loss function for an arbitrary $g$ is not convex, but only strictly proper composite (the loss function for the canonical $g$, as stated in the next sentence, is convex).\", \"line_273\": \"I believe in the denominator, there is a typo (should be $g_{can}$ instead of $g$)\", \"questions\": \"In the conclusion, you mention that the sample complexity of these tasks is not known. Do you mean to say that the empirical risk minimizer of the loss converges to the minimizer of the Bregman divergence only as the number of samples goes to infinity, but we do not know finite-sample error bounds for the empirical minimizer (similar to how in PAC learning theory, this would correspond to an additional \\\"complexity of F\\\"/#samples error term)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer\", \"comment\": \"We thank you for your time and effort used to review our work! Our answers to your questions are as follows:\\n1. Indeed, the existence is not always guaranteed. We always assume that $P$ is absolutely continuous w.r.t.~$Q$, i.e.~$P\\\\ll Q$, which guarantees the existence by the Radon-Nikod\\\\'ym derivative. That is, disjoint supports are not allowed.\\n2. In principle, dividing a heavy tail by a light tail should increase the density ratio values in the tail and make it simpler to estimate with our approaches, see also Figure 1. However, we did not test this explicitly.\\n3. 
All problems in our real-world experiments are higher-dimensional: bag-of-words representations of texts (Amazon reviews), images (MiniDomainNet) and time series of body sensor signals (HHAR).\\n\\nWe thank you again for your time and effort used to review our work.\\nIf you are satisfied with our answers and extensions, we kindly ask you to take this into account in your final decision.\"}", "{\"summary\": \"The authors characterize the set of loss functions that, when used in density ratio estimation for binary classification, lead to the minimization of a particular Bregman divergence. This approach is motivated by the observation that some commonly used losses (such as exponential or logistic) yield density ratio estimates that minimize a similar Bregman divergence expression. After identifying these losses, they design a new family of losses aimed at accurately estimating large values of the density ratio, in contrast to standard losses that focus on estimating small ratio values. They apply these designed losses to estimate density ratios in Gaussian RKHS and for unsupervised domain adaptation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper appears to be self-contained and introduces all the tools and notation it uses.\\n\\n2. The experimental results are fairly extensive for a theoretical paper.\\n\\n3. The theoretical results, especially Lemma 4 in the appendix, seem to rely on non-trivial applications of several previously established results.\", \"weaknesses\": \"1. The presentation of the paper could be significantly improved. The paper uses heavy notation \\u2014 for example, understanding $B_{-\\\\underline{L}^\\\\circ}(\\\\beta,g\\\\circ f)$ requires substantial effort. Another clear example is Remark 1, which is completely incomprehensible unless the reader is already familiar with everything it covers.\\n\\n2. The novelty of this work is unclear. 
I believe the authors would agree that the main contribution of this paper is theoretical and primarily represented by Theorem 1. However, given Remark 1, it is not evident how substantial this theorem\\u2019s contribution really is.\\n\\n3. It is not clear why one should focus on minimizing equation (1). According to the beginning of Section 2, there are \\\"many\\\" density ratio estimation methods that lead to minimizing (1), with four examples provided. What does \\\"many\\\" mean in this context, and why should one limit themself to this specific type of minimizers?\\n\\n4. Some parts of the experimental results, like Section 6.2, contain too much irrelevant information for readers, making it easy to miss the main points. I would consider moving some of this information to the appendices.\", \"questions\": \"1. Could you briefly outline the most significant theoretical contribution of your paper and the challenges involved in achieving it?\\n\\n2. Could you explain the intuition behind the flatness of your method in Figure 2? If the method aims to estimate the ratio accurately for high values, shouldn\\u2019t it follow the top of the curve closely? Furthermore, in the lower row of the figure for $\\\\alpha=0.01$, I am not sure I understand why one would prefer your estimate over KuLSIF.\\n\\n3. Since you mention that standard methods prioritize estimating smaller values and you focus on higher values, what happens if one applies standard methods to the inverse ratio, i.e., estimating $dQ/dP$?\", \"typo\": \"In footnote 2, the next-to-last equation contains probabilities that should be conditioned on $x$: it should be $\\\\rho(y=1\\u2223x)\\\\rho(x)$ instead of $\\\\rho(x,y=1)\\\\rho(x)$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work examines the estimation of the Radon-Nikodym derivative, which is the ratio of two probability densities. 
In classical algorithms, an incorrect choice of binary loss function can lead to biased estimates. The author first derived the necessary properties for an appropriate loss function. Based on this analysis, novel loss functions were proposed, demonstrating improved parameter selection in both simulated and real data examples.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The theoretical analysis appears solid, and several real datasets are used to demonstrate the proposed loss function. The authors provided detailed comparison between new results and previous analysis.\", \"weaknesses\": \"Although the author discussed how their results improve upon previous work, they did not elaborate on how their proof differs from prior analyses. Consequently, the challenges of the proof, as well as the novelty and contributions of the theoretical analysis, remain unclear. Additionally, the writing could be improved; including more high-level explanations of the motivation and results would make it easier to follow.\", \"questions\": \"1. Theorem 1 extends previous results. What is the main challenge in extending these, and what is the key novelty in the proof?\\n2. Table 1 shows that EW consistently performs best for Amazon Reviews under Importance Weighted Aggregation. Is there any intuition behind this outcome?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Answer\", \"comment\": [\"We thank all reviewers for the invested time, your efforts and the constructive comments which help to improve our work! 
We especially appreciate:\", \"Consensus about correctness of all proofs and the effectiveness in empirical evaluations\", \"That our loss functions are (to the best of the authors and reviewers\\u2019 knowledge) the first losses for density ratio estimation that assign higher weight to larger values\", \"Explicit emphasis of 55K4, Ps3k, 6SXE, FZmy on the extensiveness of our empirical evaluations\", \"Reviewers' concerns are mainly about (a) the presentation of contributions and (b) the novelty beyond loss functions and SoTA performance. Our main answers are:\", \"Concerning (a): Our contribution is to provide a complete set of techniques for designing loss functions for Algorithm 1 that (a) prioritize the estimation of large density ratio values and (b) allow high performance in practice. Three key contributions are given:\", \"Characterization: The technical characterization (Theorem 1) of loss functions satisfying Eq. (1).\", \"Losses: The design of novel losses (Section 5) for increasing weight functions $\\\\phi''$.\", \"Experiments: State-of-the-art performance in benchmark experiments involving 9174 neural networks, 484 parameter selection tasks and three datasets for text, images and human body sensor signals.\", \"We added several comments around the manuscript to clarify the main contributions.\", \"Concerning (b): The reviewers agree (a) on the novelty of the loss functions in Section 5 and (b) their state-of-the-art improvements on extensive empirical evaluations. Our third novelty is the (c) necessity of the loss function in Eq. 
(8).\"], \"our_proof_has_four_non_trivial_components\": \"(a) the inversion of the constructions developed in [Menon\\\\&Ong, Appendix B] done in Lemma 5 (updated version), (b) the application of several results from [Reid\\\\&Williamson], (c) applying Savage's theorem and (d) our technical Lemma 4 (updated version) and it's associated non-trivial two-page proof.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to author comment\", \"comment\": \"Thank you for the clarification. I maintain that the contributions in this paper are valuable, and hence maintain my positive score\"}", "{\"summary\": \"The paper addresses the challenge of estimating the ratio of two probability densities from observations. Typically, this is done using binary classifiers, but the efficacy of the estimators is significantly affected by the choice of the binary loss function used. The authors characterize loss functions that result in statistically favorable density ratio estimations, particularly focusing on achieving low errors in large density ratio values\\u2014a departure from classical approaches that perform well on small values. They introduce novel loss functions and demonstrate their application in parameter selection for deep domain adaptation tasks. Numerical experiments and real-world applications illustrate the practical benefits of these loss functions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Compared to the related literature, this paper introduces a new framework for constructing novel loss functions, prioritizing an accurate estimation of large density ratio values over smaller ones.\\n2. It provides a thorough mathematical foundation, characterizing the types of loss functions that align with specific error measures derived from Bregman divergences. The comparison with the related literature is good.\\n3. 
The work shows strong practical applicability through empirical data and a real application in deep domain adaptation. The simulation study is extensive and demonstrates the effectiveness of the proposed method.\", \"weaknesses\": \"1. The paper does not delve into the sample complexity of the proposed methods, which could be critical for understanding their efficiency in various scenarios.\\n2. While it improves estimation for large density values, the impact on performance for smaller values isn't thoroughly explored.\\n3. A more detailed introduction of the experiments should be considered.\", \"questions\": \"1. Is the existence of the density ratio always guaranteed? For example, if the two densities have no overlapping support, the definition seems to fail. Can the proposed method perform a good estimation?\\n2. If one distribution has a light tail and the other a heavy tail, how would that impact the estimation?\\n3. It seems the author considered only a one-dimensional case in the experiment, how about a multi-dimensional case when $d>0$? This is more common in covariate shift problems.\\n4. What are the biggest difficulties and challenges for deriving the sample complexity of the proposed methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No details of concerns beyond the above.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
The second challenge was to prove that equal expected Bregman divergences result in equal (up to affine terms) generators, see Lemma~4 (updated version).\\n\\n The key novelty consists in the necessity of Eq.(8), which was (to the best of the authors' and all other reviewers' knowledge) not explicitly stated before.\\n2. Our intuition is that Amazon Reviews has more peaky density ratios, but we are far from a proof of this intuition.\\n\\nWe thank you again for your time and effort used to review our work.\\nIf you are satisfied with our answers and extensions, we kindly ask you to take this into account in your final decision.\"}", "{\"comment\": \"Thank you for your response. I acknowledge that the updated version addresses some of my concerns, and I have adjusted my score accordingly.\"}", "{\"title\": \"Answer\", \"comment\": \"We thank you for your comments which help to improve our work! We especially appreciate your notes on extensiveness of experiments and non-triviality of theoretical results.\", \"our_answers_are_as_follows\": [\"We agree that the theory requires some effort to learn. To address the confusion, we added several remarks in the updated paper to clarify the main points.\", \"Our contribution is to *provide a complete set of techniques for designing loss functions for Algorithm 1 that (a) prioritize the estimation of large density ratio values and (b) allow high performance in practice.* This requires three main contributions:\", \"Characterization: The technical characterization (Theorem 1) of loss functions satisfying Eq. (1).\", \"Losses: The design of novel losses (Section 5) for increasing weight functions $\\\\phi''$.\", \"Experiments: State-of-the-art performance in benchmark experiments.\", \"Eq.(1) is the error of any estimator computed by Algorithm 1. 
The motivation for using Algorithm 1 is the same as the motivation for using generative adversarial networks or classifier-based statistical tests: the relation between discriminative and generative approaches.\", \"We completely agree with the amount of information. However, trading this off with reproducibility of experiments and the wish of other reviewers to extend this section, we decided to leave this section as it is.\", \"Minor typo: Sorry, we are confused; what difference between the two do you want to highlight?\", \"We thank you for your detailed questions which especially help to improve our presentation. Our answers are:\", \"1. The main contributions are stated above. The corresponding challenges were:\", \"Characterization: There were two main challenges.\", \"The first challenge was to invert the constructions in [Menon\\\\&Ong, Appendix B] and to non-trivially combine it with several results from [Reid\\\\&Williamson], see the proof of Lemma 5 (updated version). The second challenge was to prove that equal expected Bregman divergences result in equal (up to affine terms) generators, see Lemma 4 (updated version) and its associated non-trivial two-page proof.\", \"Losses: The main contribution was to find suitable increasing weight functions which can be efficiently optimized and clearly lead to better weightings than related approaches in our numerical examples.\", \"Experiments: The main challenge was to train 9174 neural networks with 11 different deep learning algorithms on 484 parameter selection tasks.\", \"2. Figure 2 shows the effect of regularization using the experiment of [Zellinger et al., 2023, JMLR]. The figure is not aimed at benchmarking, but rather visualizes the effect of better predicting larger values (flatness). It shows the consequently worse approximations for smaller values.\", \"3. 
This is a good idea which, however, requires the inverse ratio to exist.\", \"We thank you again for your time and effort used to review our work.\", \"If you are satisfied with our answers and extensions, we kindly ask you to take this into account in your final decision.\"]}", "{\"comment\": \"Thank you for your response. I acknowledge that the updated version addresses some of my concerns, and I have adjusted my score accordingly.\"}" ] }
55pCDKiS8B
Elucidating the Preconditioning in Consistency Distillation
[ "Kaiwen Zheng", "Guande He", "Jianfei Chen", "Fan Bao", "Jun Zhu" ]
Consistency distillation is a prevalent way for accelerating diffusion models adopted in consistency (trajectory) models, in which a student model is trained to traverse backward on the probability flow (PF) ordinary differential equation (ODE) trajectory determined by the teacher model. Preconditioning is a vital technique for stabilizing consistency distillation, by linear combining the input data and the network output with pre-defined coefficients as the consistency function. It imposes the boundary condition of consistency functions without restricting the form and expressiveness of the neural network. However, previous preconditionings are hand-crafted and may be suboptimal choices. In this work, we offer the first theoretical insights into the preconditioning in consistency distillation, by elucidating its design criteria and the connection to the teacher ODE trajectory. Based on these analyses, we further propose a principled way dubbed \textit{Analytic-Precond} to analytically optimize the preconditioning according to the consistency gap (defined as the gap between the teacher denoiser and the optimal student denoiser) on a generalized teacher ODE. We demonstrate that Analytic-Precond can facilitate the learning of trajectory jumpers, enhance the alignment of the student trajectory with the teacher's, and achieve $2\times$ to $3\times$ training acceleration of consistency trajectory models in multi-step generation across various datasets.
[ "Diffusion Models", "Distillation", "Consistency Trajectory Models" ]
Accept (Poster)
https://openreview.net/pdf?id=55pCDKiS8B
https://openreview.net/forum?id=55pCDKiS8B
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zMDgY0SZNQ", "rONGW0EWbL", "oNUnR9UDhj", "l3jHvtEbrS", "ibyOU3BRq5", "flqJRfqhrB", "dvyOaBMRaP", "drjPIHVlgb", "VXbmZ4p8Wq", "U6Qwa5LCyO", "SqD0h0NcXW", "MPkv5C82O1", "HAVA9CHwN2", "Gb38YU9umQ", "BbOCvlVlqs", "7EdcEql2EM", "3kMGfXWlRr", "1V55cZTqpg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732591928817, 1732611986181, 1732327993469, 1737523567537, 1732794033264, 1732613574399, 1730768151393, 1732391272332, 1730894293828, 1732327948155, 1730715566685, 1732328020530, 1732659918192, 1730356562477, 1732327975179, 1732800608456, 1734616884127, 1732702266787 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_TK51" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_ae52" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_yEcg" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_6jeK" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_6jeK" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_ae52" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_yEcg" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Reviewer_TK51" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ], [ "ICLR.cc/2025/Conference/Submission3286/Area_Chair_FdrL" ], [ "ICLR.cc/2025/Conference/Submission3286/Authors" ] ], "structured_content_str": [ 
"{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your responses. I keep my acceptance score.\\n\\nJust one comment regarding \\\"We believe consistency distillation is a more efficient and promising approach, as consistency training from scratch typically requires more iterations and results in sub-par performance.\\\" I agree with the performance gap. Consistency training has some benefits though, especially for very large models where even keeping two copies of the model may be problematic. Though I don't think not dealing with this problem takes merit away from the paper; distillation is also a very important topic by itself.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful feedback and for keeping your acceptance score. We agree that consistency training can have benefits in certain scenarios and will leave improvements on them for further research.\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our work! We appreciate your thoughtful and positive feedback on our work. We hope the responses below address your concerns.\\n\\n*Q1: The paper focuses on the case where $f=0, g=\\\\sqrt{2t}$ because of the recent literature. Is the method still applicable for other choices of $f$ and $g$?*\\n\\n**A**: Sure! A more convenient way is to represent the forward process defined by the SDE as $q(x_t|x_0)=\\\\mathcal{N}(\\\\alpha_tx_0,\\\\sigma_t^2I)$. The coefficients $\\\\alpha_t,\\\\sigma_t$ are determined by $f(t),g(t)$ in the forward SDE and satisfy $f(t)=\\\\frac{d\\\\log \\\\alpha_t}{d t}$, $g^2(t)=\\\\frac{d \\\\sigma_t^2}{d t}-2\\\\frac{d\\\\log \\\\alpha_t}{d t}\\\\sigma_t^2$ [1]. The case $f=0, g=\\\\sqrt{2t}$ is adopted in the recent literature as it corresponds to $\\\\alpha_t=1,\\\\sigma_t=t$ which is quite simple, and the corresponding diffusion ODE is $\\\\frac{dx_t}{dt}=\\\\frac{x_t-\\\\hat x_0}{t}$, where $\\\\hat x_0$ is the predicted $x_0$ by the denoiser function. 
For more general $f,g$, or equivalently $\\\\alpha_t,\\\\sigma_t$, we can apply some transformations to turn it into the simple case $\\\\alpha_t=1,\\\\sigma_t=t$.\\n\\nSpecifically, if we define $x_t'=\\\\frac{x_t}{\\\\alpha_t}$, then $x_t'$ satisfies the forward process $q(x_t'|x_0')=\\\\mathcal{N}(x_0',\\\\frac{\\\\sigma_t^2}{\\\\alpha_t^2}I)$. If we additionally define a new time $t'=\\\\frac{\\\\sigma_t^2}{\\\\alpha_t^2}$, the noise-to-signal ratio, then the forward process of $x_t'$ corresponds to $\\\\alpha_t=1,\\\\sigma_t=t'$. Intuitively, this creates a \\\"wrapper\\\" for the original process and turns it into the simple case. Therefore, the corresponding diffusion ODE can be written as $\\\\frac{dx_t'}{dt'}=\\\\frac{x_t'-\\\\hat x_0}{t'}$, and we can still follow the procedure in the paper to derive preconditionings.\\n\\n[1] Variational Diffusion Models\\n\\n*Q2: As far as I understand, the whole discussion depends on finding a good discretization of ODE (2). Both (9) and (13) use first order (Euler) methods. Can we get more insight if we try to use a better integrator?*\\n\\n**A**: That is an insightful understanding and question. We think in the design of preconditionings, we can only rely on first-order Euler methods. Better integrators such as high-order Runge-Kutta methods require multiple evaluations of the ODE drift (which involves the network) to perform a single ODE step. However, in the case of preconditioning, which is a linear combination of $x_t$ and the network output, it will be much more expensive if we combine multiple network outputs, considering that they are involved in the gradient backpropagation during training.\\n\\n*Q3: Why is $q_T$ on line 118 a 0 mean Gaussian? Do we have some condition on $\\\\mathbb{E}[q_0]$?*\\n\\n**A**: The zero-mean is only an approximation. There is no special restriction on the data distribution $q_0$. 
The intuition is, in practice, the data range is small (normalized to [-1,1]), while the final time $T$ is set to a very large value (like 80). This is often called a \\\"variance-exploding\\\" noise schedule. Therefore, compared to the large variance of the Gaussian distribution, the mean is relatively negligible. This can be understood from another perspective. The noisy data $x_t=x_0+t\\\\epsilon,\\\\epsilon\\\\sim\\\\mathcal{N}(0,I)$ has a very large scale when $t$ is large. For stability, before input into the network, it will first be normalized to something like $\\\\frac{x_t}{\\\\sqrt{1+t^2}}=\\\\frac{1}{\\\\sqrt{1+t^2}}x_0+\\\\frac{t}{\\\\sqrt{1+t^2}}\\\\epsilon$. Therefore, as $t\\\\rightarrow\\\\infty$, the component of $x_0$ will tend to 0.\\n\\n*Q4: The $\\\\lambda_t$ below equation (11) might be confused with $\\\\lambda(t)$ in equations (5), (7).*\\n\\n**A**: Thank you for your suggestion. We have revised the notation of the weighting function from $\\\\lambda(t)$ to $w(t)$.\\n\\n*Q5: $x$ and $x_t$ are used interchangeably in the RHS and LHS of the equations, better to be consistent. Example: line 182, line 276.*\\n\\n**A**: Thanks for spotting the typos. We have fixed them in the revised paper.\\n\\n*Q6: Is the code of the experiments released?*\\n\\n**A**: Due to the high cost of the training experiments and the need for proper permissions, we plan to release the code upon acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you so much for your clarification - I would like to keep my acceptance score.\"}", "{\"comment\": \"I thank the authors for the detailed answer, I appreciate all the clarification provided. 
As a non-expert on the subject, I keep my acceptance score since I think the paper should be accepted, but I defer to the other reviewers for a discussion of the relevance of the content.\"}", "{\"summary\": \"The paper titled \\\"Elucidating the Preconditioning in Consistency Distillation\\\" examines consistency distillation techniques for diffusion models, where a student model learns to follow the probability flow trajectory set by a teacher model. This distillation accelerates generation by reducing the inference steps. The paper specifically explores preconditioning, a method that combines input data with network outputs to improve stability during training. Traditionally, preconditioning has been handcrafted, but this paper introduces a theoretically optimized method named \\\"Analytic-Precond.\\\" This new approach minimizes the gap between teacher and student denoisers, thereby improving training efficiency and trajectory alignment. Experimental results demonstrate that Analytic-Precond achieves up to 3x acceleration in training across various datasets, indicating its potential in enhancing consistency models for faster multi-step generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Theoretical Innovation in Preconditioning**: The paper introduces \\\"Analytic-Precond,\\\" a novel, analytically derived preconditioning method that theoretically optimizes the consistency distillation process. This goes beyond prior handcrafted preconditionings, offering a principled approach that minimizes the consistency gap between the teacher and student models. This theoretical grounding not only strengthens the methodology but also provides new insights into consistency distillation.\\n\\n**Significant Training Acceleration**: Experimental results show that Analytic-Precond achieves 2-3x faster training in multi-step generation tasks on standard datasets. 
This improvement in speed is impactful, especially for resource-intensive applications of diffusion models, as it directly addresses the bottleneck of slow inference that has historically limited diffusion models.\", \"weaknesses\": [\"This paper does not show whether BCM is better than CTM + Analytic-Precond in terms of FID.\", \"Analytic-Precond does not perform better when GAN is incorporated into the CTM. Can the authors provide an explanation or intuition for this?\"], \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your comments. However, you said \\\"CTM, even combined with our Analytic-Precond, is slightly worse than BCM in terms of FID\\\", which means that BCM's better performance does not come from the preconditioning. Therefore, I wonder (1) is adjusting the preconditioning the most efficient way to improve the performance? (2) if you combine BCM with your preconditioning methods, will it be better than BCM? If the answer to (2) is yes, I think I will give score 8. I will revise the score to 6 for now.\"}", "{\"summary\": \"This paper proposes a general paradigm of preconditioning design in consistency distillation, which is a common technique used for accelerating the inference time of consistency models based on teacher-student training (i.e., knowledge distillation). Specifically, this paper focuses on preconditioning, which is a vital technique for stabilizing consistency distillation. Compared to previous hand-crafted choices of preconditioning, this paper proposes a principled way called \\\"Analytic-Precond\\\" to analytically optimize the preconditioning based on the consistency gap associated with the teacher probability flow ODE. 
Numerical experiments on multiple datasets are included to justify the effectiveness of \\\"Analytic-Precond\\\".\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Complete proofs are included for each proposition in the manuscript.\\n2. Extensive numerical experiments are provided to validate the effectiveness of the proposed methodology.\", \"weaknesses\": \"Presentation of the manuscript can be further improved by rewriting certain phrases and expanding on some technical details. For instance, the phrase \\\"CMs aim to a consistency function\\\" on line 134 might be better rephrased as \\\"CMs aim to learn a consistency function\\\". For possible ways of explaining technical details in a better way, one may refer to the \\\"Questions\\\" section below.\", \"questions\": \"In lines 265-266, the authors mentioned that the parameter $l_t$ in \\\"Analytic-Precond\\\" is chosen to be the minimizer of the expected gradient norm $E_{q(x_t)}[\\\\|\\\\nabla_{x_t}g_{\\\\phi}(x_t,t)\\\\|_F]$ based on earlier work [1]. Would it be possible for the authors to further expand on why such a choice ensures the robustness of the resulting ODE against errors in $x_t$? Which section/part of [1] discussed the reason behind such a choice?\", \"references\": \"[1] Zheng, Kaiwen, Cheng Lu, Jianfei Chen, and Jun Zhu. \\\"Improved techniques for maximum likelihood estimation for diffusion ODEs.\\\" In International Conference on Machine Learning, pp. 42363-42389. PMLR, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our work! We appreciate your thoughtful and positive feedback on our work. 
We hope the responses below address your concerns.\\n\\n*W1: For instance, the phrase \\\"CMs aim to a consistency function\\\" on line 134 might be better rephrased as \\\"CMs aim to learn a consistency function\\\"*\\n\\n**A**: Thank you for carefully reading our paper and pointing out the typo. We have fixed it in the revised paper.\\n\\n*Q1: Would it be possible for the authors to further expand on why such a choice ensures the robustness of the resulting ODE against errors in $x_t$?*\\n\\n**A**: Sure! The reference you mentioned actually points to another paper that applies Rosenbrock-type exponential integrators to diffusion ODEs. We would like to give an illustrative example to explain its core idea and how it enhances the robustness against errors in $x_t$.\\n\\nSuppose we want to solve an ODE from $t_n$ to $t_{n+1}$, where $t_{n+1}>t_n$:\\n\\n$$\\n\\\\frac{dx_t}{dt}=F(x_t)\\n$$\\n\\nThe core idea of Rosenbrock-type exponential integrators is to separate as much of the linear component from $F$ as possible, as the linear part can be analytically absorbed into a \\\"modulation\\\" of the ODE. Specifically, let $F(x_t)=-l x_t+N(x_t)$, where $l>0$ (so that the ODE is not explosive in forward time), and the non-linear part $N$ satisfies that $\\\\frac{dN(x_t)}{dx_t}\\\\approx 0$ at $t=t_n$. 
Then by the product rule, the original ODE can be turned into an ODE that describes the evolution of $e^{l t}x_t$ instead of $x_t$.\\n\\n$$\\n\\\\frac{d(e^{l t}x_t)}{dt}=e^{l t}\\\\frac{dx_t}{dt}+l e^{l t}x_t=e^{l t}(F(x_t)+l x_t)=e^{l t}N(x_t)\\n$$\\n\\nDenoting $h=t_{n+1}-t_n$, the Euler discretization of the original ODE is\\n$$\\nx_{n+1}-x_n=h(-l x_n+N(x_n))\\\\Rightarrow x_{n+1}=(1-hl)x_n+hN(x_n)\\n$$\\n\\nThe Euler discretization of the modulated ODE is\\n$$\\ne^{l t_{n+1}}x_{n+1}-e^{l t_{n}}x_{n}=he^{l t_{n}}N(x_{n})\\\\Rightarrow x_{n+1}=e^{-l h}(x_n+hN(x_n))\\n$$\\n\\nSuppose $x_n'=x_n+e_n$ is the perturbed $x_n$ with error $e_n$, and $e_{n+1}=x_{n+1}'-x_{n+1}$ is the resulting error in $x_{n+1}$. As $\\\\frac{dN(x)}{dx}\\\\approx 0$ at $x=x_n$, we can omit $N(x_n')-N(x_n)$ for small $e_n$. Therefore, $|e_{n+1}|=|1-hl||e_n|$ for the original ODE, which may amplify the error when $h$ is large and $|1-hl|>1$. Instead, $|e_{n+1}|=e^{-l h}|e_n|<|e_n|$ after separating the linear part and modulating the ODE.\"}", "{\"summary\": \"The paper works on _Preconditioning_, a technique used in the consistency distillation of diffusion models to obtain consistency functions that directly satisfy the boundary conditions required by the problem. Preconditioning consists in linearly linking the input to the output of a network. In the literature, the choice of linear coefficients is based on intuition. The paper introduces instead a new analytical method, called _Analytic-Precond_, for setting the coefficients. The method consists in applying a parametric discretization of the probability flow ODE, and then optimizing the parameters by minimizing the gap between the optimal student and the teacher, while keeping the discretization as robust as possible. 
Finally, some numerical proofs show that the derived result leads to a speed-up in the inference of diffusion models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper presents strong mathematical arguments to support the choice of coefficients, including an explanation for the CTM choices that is not just based on intuition as in previous methods.\\n- The paper shows numerical proofs of the claims made, underlining when _Analytic-Precond_ offers no advantage (single step) and when it does (two or more steps).\", \"weaknesses\": [\"The paper might be a little hard to read for those who are not familiar with distillation. I personally took a while to grasp the setting and all the notation. For example, $\\\\phi$ is used many times before definition. It could be worth having a brief discussion about some nomenclature like _teacher_ & _student_.\"], \"questions\": [\"The paper focuses on the case where $f=0, g=\\\\sqrt{2t}$ because of the recent literature. Is the method still applicable for other choices of $f$ and $g$?\", \"As far as I understand, the whole discussion depends on finding a good discretization of ODE (2). Both (9) and (13) use first order (Euler) methods. Can we get more insight if we try to use a better integrator?\"], \"minor\": [\"Why is $q_T$ on line 118 a 0 mean Gaussian? Do we have some condition on $\\\\mathbb{E}[q_0]$?\", \"The $\\\\lambda_t$ below equation (11) might be confused with $\\\\lambda(t)$ in equations (5), (7).\", \"$x$ and $x_t$ are used interchangeably in the RHS and LHS of the equations, better to be consistent. Example: line 182, line 276.\", \"Is the code of the experiments released?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our work! We appreciate your thoughtful and positive feedback on our work. 
We hope the responses below address your concerns.\\n\\n*Q1: How does performance and the actual values for the preconditioners change as we change the number of samples used to estimate Eqs. 15 and 17 (expressions for the preconditioners)? How small can we make the number of samples used while still retaining good performance? Can we further boost performance using more samples?*\\n\\n**A**: We did not extensively tune the number of samples due to the high cost of training experiments. Furthermore, even with the same number of samples, the trace estimator's variance and the specific dataset partition used can lead to differing estimations. Based on our experience, selecting a sample size between 1024 and 4096 produces preconditioners that appear visually similar and generally achieve speed-ups exceeding 1.5x. However, using fewer than 200 samples introduces estimation bias and degrades performance. We also experimented with larger sample sizes, up to 8192, but observed no additional improvements. This suggests that we have exploited the potential improvement space of the preconditioning, and optimizing the preconditioning alone may have an upper bound and cannot bring further benefits.\\n\\n*Q2: Did you think about consistency training? These preconditioning parameters also show up in that case, but we don\\u2019t have access to the teacher, so it is unclear how to use the ideas in this work.*\\n\\n**A**: We believe consistency distillation is a more efficient and promising approach, as consistency training from scratch typically requires more iterations and results in sub-par performance. Intuitively, diffusion models can be stably and quickly trained to give the tangent of the ODE trajectory. They average the high-variance tangents from random data-noise pairs and offer stable predictions. Even for consistency training, preparing a diffusion model in advance may also be beneficial. 
For example, in Appendix B.3 of the consistency model paper, they claim that they initialize the consistency model with the pretrained diffusion model when conducting continuous-time consistency training, and this initialization significantly stabilizes the training.\\n\\n*Q3: While multi-step consistency models have been observed to underperform CTMs, it would be nice to have values in some of the results reported.*\\n\\n**A**: Thanks for your suggestion. We additionally tested multi-step CM and present the results below.\\n\\n|NFE|1|2|3|5|8|10|\\n|:--------|:-------:|:-------:|:------:|:------:|:-------:|:-------:|\\n|CM|3.54|2.94|2.93|2.95|3.22|3.30|\\n|CTM|3.57|3.00|2.82|2.59|2.67|2.56|\\n|CTM+Ours|3.57|2.92|2.75|2.62|2.50|2.51|\\n\\nAs expected, CM cannot boost its performance with more sampling steps. It needs to perform alternating backward and forward jumps. As the forward jump is stochastic, it no longer ensures trajectory consistency and instead accumulates error.\\n\\n*Q4: What dataset is used for the GAN experiment?*\\n\\n**A**: CIFAR-10 is used, as other datasets are in 64x64 resolution and demand higher computational resources. We have revised the paper to specify this.\"}", "{\"summary\": \"Consistency (trajectory) distillation typically uses a network parameterized as $f(x, t, s) = \\\\alpha_{t, s} F_\\\\theta(x, t, s) + \\\\beta_{t, s} x$, with specific expressions for the coefficients (i.e., preconditioners) $\\\\alpha_{t, s}, \\\\beta_{t, s}$ so that the boundary conditions are automatically satisfied. This paper proposes an efficient method to find alternative preconditioners that yield improved performance and faster training. 
They derive the expressions for the preconditioners by re-expressing the underlying ODE in terms of certain additional variables and propose simple objectives whose minimizers can be found analytically to set the values for these variables.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Consistency (trajectory) distillation represents an extremely popular approach for fast generation, solving the main drawback of diffusion models. Finding approaches to improve distillation, either the final performance achieved or its training cost, is extremely relevant to improving the practicality and applicability of large models currently being trained in multiple domains.\\n\\nThis paper tackles this problem by changing the preconditioning parameters used for the neural network. To my knowledge the procedure described in the paper is novel, and empirical results show that, for CTMs, the proposed approach yields a training speedup of 2x for multiple datasets.\\n\\nThe final proposed method is quite simple to implement, with closed-form expressions for the preconditioners. (However, these expressions involve nonlinear functions of intractable expectations which are estimated with samples, meaning that the actual values obtained have both bias and variance.)\", \"weaknesses\": \"The paper deals with the consistency (trajectory) distillation case. In practice, a very relevant alternative that is also widely used (and also uses similar preconditioners for the network) is consistency training. That is, directly training fast samplers without distilling a base model. The paper does not address this problem at all. While I understand this is not necessary, and distillation in itself is an important task, it would be interesting to see whether certain ideas from this work can be extended to consistency training. 
(Most of the expressions for the preconditioners rely on having a pre-trained model, so it is unclear how to generalize this approach, if that is possible.)\\n\\nThe approach seems to help when using CTM with 2 steps or more. For single-step generation, performance and training curves pretty much overlap with and without the proposed approach. One-step generation plays an important role in real-time generation, and it would be very interesting to develop methods that can improve training and final performance in that setting as well. Additionally, the fact that the derived preconditioners are essentially the same as the ones naively used by consistency (trajectory) distillation raises the question of whether there are other preconditioners that can be used that might help in this case as well. Again, not exploring this does not reduce the paper\\u2019s merit, but I think it is an interesting question too.\\n\\nThe method does not yield any benefits when used in concert with a GAN auxiliary loss. Using this loss has been observed to lead to improved performance, and indeed, the best results reported in the paper are using the GAN loss, if I understand correctly. The proposed approach does not yield benefits (but at least does not hurt) when using this auxiliary loss.\", \"questions\": \"How does performance and the actual values for the preconditioners change as we change the number of samples used to estimate Eqs. 15 and 17 (expressions for the preconditioners)? Results in the paper are obtained estimating them for 120 different times, using 4096 samples to estimate expectations. This yields good results for 2 step sampling, at a modest computational cost. This is definitely not strictly necessary, but I think it could be interesting to see how the performance and preconditioner values change as we change the number of samples used. The estimators used have variance and bias, both of which decrease with the number of samples. 
How small can we make the number of samples used while still retaining good performance? Can we further boost performance using more samples?\\n\\nThis is all for distillation. Did you think about consistency training? These preconditioning parameters also show up in that case, but we don\\u2019t have access to the teacher, so it is unclear how to use the ideas in this work.\\n\\nWhile multi-step consistency models have been observed to underperform CTMs, it would be nice to have values in some of the results reported. I understand training is exactly the same, so I\\u2019m not expecting the new preconditioners to help there. But it would be nice having plain multi-step CMs in some results.\\n\\nWhat dataset is used for the GAN experiment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your time and effort in reviewing our work! We appreciate your thoughtful and positive feedback on our work. We hope the responses below address your concerns.\\n\\n*W1: This paper does not provide whether BCM is better than CTM+ Analytic-Precond in terms of FID.*\\n\\n**A**: CTM, even combined with our Analytic-Precond, is slightly worse than BCM in terms of FID. The reason is that BCM adopts techniques from improved Consistency Training (iCT), such as better scheduler function and reweighting function. As our work mainly focuses on preconditioning instead of other techniques, we demonstrate in Figure 5 that BCM's preconditioning is not superior and cannot bring improvements to CTM, while ours can.\\n\\n*W2: Analytic-Precond does not perform better when GAN is incorporated into the CTM. Can the authors provide an explanation or intuition for this?*\\n\\n**A**: Sure! 
The analyses of preconditioning in our work, such as the consistency gap, are built on the investigations of how to learn better trajectory jumpers and maintain fidelity to the teacher ODE trajectory. However, the incorporation of the GAN loss is merely to enhance the FID at 1-step. As shown in Figure 6, in this scenario, the consistency function no longer faithfully adheres to the teacher ODE trajectory, and one-step generation is even better than two-step, deviating from our theoretical foundations.\"}", "{\"title\": \"Thank you\", \"comment\": \"We are glad our clarification is helpful and appreciate your support.\"}", "{\"metareview\": \"This paper studies the design criteria of the preconditioning in consistency distillation and proposes a novel and principled preconditioning with a 2x to 3x speedup. Compared to previous hand-crafted choices of preconditioning, this submission develops a principled way called \\\"Analytic-Precond\\\" to analytically optimize the preconditioning based on the consistency gap associated with the teacher probability flow ODE. Experiments on several datasets justify the effectiveness of the proposed method. The AC recommends that the authors include the reviewers' feedback and suggestions.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, the authors addressed the issues raised by the reviewers, for example, the robustness of the ODE against errors under the selected choice, and combining BCM/CTM with the preconditioning methods.\"}", "{\"comment\": \"Thank you for your feedback. Though our method may not be the most efficient way, we believe it is universal for enhancing trajectory consistency and can be combined with other techniques. As BCM did not provide code on CIFAR10, we refer to and adapt their ImageNet64 version for distillation. 
We provide some preliminary results on 2-step FID.\\n\\n|Iteration|10k|20k|30k|40k|50k|\\n|:--------|:-------:|:-------:|:------:|:------:|:-------:|\\n|BCM|3.71|3.38|3.23|3.10|3.05|\\n|BCM+Ours|3.47|3.26|3.12|3.02|2.99|\\n\\nDespite potential implementation differences, this can serve as evidence of the applicability of our method.\"}" ] }
55oi1LCdDL
Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning
[ "Da-Wei Zhou", "Zi-Wen Cai", "Han-Jia Ye", "Lijun Zhang", "De-Chuan Zhan" ]
Domain-Incremental Learning (DIL) involves the progressive adaptation of a model to new concepts across different domains. While recent advances in pre-trained models provide a solid foundation for DIL, learning new concepts often results in the catastrophic forgetting of pre-trained knowledge. Specifically, sequential model updates can overwrite both the representation and the classifier with knowledge from the latest domain. Thus, it is crucial to develop a representation and corresponding classifier that accommodate all seen domains throughout the learning process. To this end, we propose DUal ConsolidaTion (Duct) to unify and consolidate historical knowledge at both the representation and classifier levels. By merging the backbone of different stages, we create a representation space suitable for multiple domains incrementally. The merged representation serves as a balanced intermediary that captures task-specific features from all seen domains. Additionally, to address the mismatch between consolidated embeddings and the classifier, we introduce an extra classifier consolidation process. Leveraging class-wise semantic information, we estimate the classifier weights of old domains within the latest embedding space. By merging historical and estimated classifiers, we align them with the consolidated embedding space, facilitating incremental classification. Extensive experimental results on four benchmark datasets demonstrate Duct's state-of-the-art performance.
[ "Domain-Incremental Learning", "Pre-Trained Model", "Continual Learning" ]
https://openreview.net/pdf?id=55oi1LCdDL
https://openreview.net/forum?id=55oi1LCdDL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNDb10P8RB", "NzylsGsTyT", "NpajcA7ymV", "D1S1Wyoz4s", "8jF9TVW5yP", "3hFoX8tF8h" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730572354785, 1730466092713, 1730970854982, 1730773596282, 1730455749138, 1731591460124 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5770/Reviewer_EjWt" ], [ "ICLR.cc/2025/Conference/Submission5770/Reviewer_ebkv" ], [ "ICLR.cc/2025/Conference/Submission5770/Reviewer_wWjP" ], [ "ICLR.cc/2025/Conference/Submission5770/Reviewer_f6C1" ], [ "ICLR.cc/2025/Conference/Submission5770/Reviewer_Y8cX" ], [ "ICLR.cc/2025/Conference/Submission5770/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose DUCT, a method for domain incremental learning (DIL). DIL is the setting where a sequence of tasks is presented during model finetuning. The training algorithm does not have access to data from prior tasks. The authors decompose the task overfitting problem into two components: (1) representation overfitting and (2) classifier overfitting. The authors tackle the two problems separately and propose novel model-merging-inspired techniques to solve both.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The manuscript is well-organized despite the complicated method.\\n\\n(2) The method is novel.\\n\\n(3) Results are good.\", \"weaknesses\": \"(1) Many notations are not introduced beforehand. This makes the math hard to follow and the method ambiguous. For example, what are $\\\\phi_i^m$ and $\\\\alpha_\\\\phi$ in equation 4? Furthermore, equations are not introduced in the correct order. For example, Eq. 5 depends on a value that is not defined until Eq. 7.\\n\\n(2) It is unclear why the proposed method is better than model merging (intuitively).\", \"questions\": \"Why do you need the two-stage merging? 
Could you just absorb the linear weights into $\\\\phi$ and use equation (5)?\\n\\nIs there any way you could calculate an upper bound for Table 1 (e.g., performance of finetuning on the union of all datasets)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is motivated by the forgetting problem of features and the mismatch problem of the classifier in domain-incremental learning. Then this paper proposes to address the above problems by unifying the historical knowledge at both the feature and classifier level. In particular, this paper proposes to merge the backbone of different stages and utilize optimal transport to adapt the classifier of old domains to both the new domain and the merged backbone. This paper conducts multiple experiments to demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.This paper is well-motivated. It is easy to understand the motivation of the method. It is reasonable to consider both feature extraction and classification in incremental learning.\\n2.The experimental results show that the proposed method performs better than the previous methods, which successfully demonstrates the effectiveness of the proposed method.\", \"weaknesses\": \"1.The contributions are slightly limited. Integrating the models to balance different stages of tasks has been applied in incremental learning, such as [1].\\n2.Many descriptions and notations are confusing. For example, in line 143 of page 3, there is a lack of explanation of \\u201cb\\u201d in \\u201cb|Y|\\u201d. In lines 179-185, the author seems to have used different words (features, representation, and embedding) to convey the same meaning, and I don't quite understand why the author did so. In Eq.(4), there is a lack of interpretation of , maxima value, and initial values in the summation notation. 
In line 257 of page 5, the description of \\u201cat most two backbones in memory\\u201d is confusing since I find there are at least three models (,, ) in memory according to line 256. Finally, the authors are conflicted on which way to integrate, as shown in line 256 and Algorithm 1.\\n[1] Zheng Z, Ma M, Wang K, et al. Preventing zero-shot transfer degradation in continual learning of vision-language models[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 19125-19136.\", \"questions\": \"1.What is the impact on the results if the proposed method does not use task vectors during integration?\\n2.In lines 198-200 of page 4, the authors claim that the proposed method can capture the domain-specific features of all domains. This is an interesting claim. How do the authors prove it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces DUCT, a dual consolidation technique for domain-incremental learning (DIL) that effectively mitigates catastrophic forgetting. DUCT addresses the challenge of balancing knowledge across domains by Representation Consolidation and Classifier Consolidation. The paper demonstrates DUCT\\u2019s effectiveness through extensive experiments on four benchmark datasets, showing it consistently outperforms state-of-the-art methods in terms of accuracy and forgetting measure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tDUCT introduces an approach to domain-incremental learning by addressing both feature and classifier forgetting simultaneously, providing a fresh perspective on solving the catastrophic forgetting problem.\\n2.\\tDUCT cleverly combines ideas from model merging and optimal transport. 
The merging of task vectors with the pre-trained model and the use of optimal transport for classifier alignment are creative applications of existing techniques to the DIL context.\\n3.\\tThe paper presents comprehensive experimental results across four benchmark datasets and five task orders, demonstrating the robustness and effectiveness of DUCT.\\n4.\\tThis paper is well-organized, with clear sections and logical flow. The methodology is explained in detail, and the experimental setup and results are presented clearly.\", \"weaknesses\": \"1.\\tThere are many vision language models like CLIP that can perform zero-shot classification. Can the author report the results of CLIP and CLIP-related fine-tuning methods such as CoOp, CoCoOp, etc., to demonstrate the advantages of the article's method compared to these general models?\\n2.\\tI wonder what data is used to calculate the class center of the pretrained model? If using pretrained data such as ImageNet, the first issue is how to ensure consistency with downstream task categories to calculate class center similarity? The second question is how to reduce the overhead caused by the large number of categories and the data size?\\n3.\\tThe author should conduct experiments on more backbones to demonstrate the effectiveness of the method, such as convolutional neural networks like ResNet.\\n4.\\tTask similarity is calculated based on all categories; this may lead to the influence of some categories being overly magnified while the influence of others is ignored.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The DUal ConsolidaTion (DUCT) framework addresses feature drift and forgetting by integrating historical backbones through a representation merging technique, creating stable task-specific embeddings. 
Additionally, DUCT\\u2019s classifier consolidation process merges calibrated and historical weights, preserving semantic class relationships and resisting forgetting, yielding strong performance on diverse benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors analyze the current challenge of forgetting in domain incremental learning (DIL) and its underlying causes.\\nThey propose a model-merging approach that demonstrates promising accuracy.\", \"weaknesses\": \"1. Equations 4 and 5 attempt to build a unified embedding through weighted summation of model weights, raising questions about feasibility. Given the complexity and lack of interpretability in deep network weights, is this combination effective, or could it intensify conflicts within the feature space? More comprehensive theoretical analysis is required.\\n2. While the authors suggest that DIL could benefit applications like autonomous vehicles and face recognition, their experiments focus on classification tasks. Testing on more realistic applications could be more convincing.\\n3. The proposed DUCT method relies on model merging. However, as domains accumulate, previously merged models may become overly complex, containing information from multiple domains, while models from newer domains include only the latest domain data. This could lead to an imbalance between older and newer domains, creating potential confusion and forgetting.\\n4. The authors tested DUCT on ViT-B/16, but other methods, like S-Prompts, report results on the more powerful CLIP backbone. Does DUCT maintain its effectiveness on a stronger backbone?\", \"questions\": \"see the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of exemplar-free Domain-Incremental Learning (DIL) given a pre-trained embedding model. 
The proposed method, DUCT, jointly consolidate historical knowledge at both the representation level and the classifier level. At the representation level, the authors modify the technique of task vectors via considering the task similarity. At the classifier level, the authors propose to retrain a new classifier, and leverage the new classifier to modify the old classifier via optimal transport. To evaluate its effectiveness, DUCT is compared with DIL baselines on four cross-domain benchmarks. DUCT achieves state-of-the-art performance on all the experiments. An ablation study as well as other analytical experiments are reported to provide a more in-depth analysis of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The problem of exemplar-free domain-incremental learning is more challenging yet more practical. The authors did a good job maintaining the historical knowledge without replaying the past data.\\n2. The proposed method has strong performance, achieving a significant accuracy improvement compared to existing DIL methods.\\n3. The algorithm is simple, which could make a broader impact to the community.\", \"weaknesses\": \"1. The major concern is that, as shown in the ablation study in table 2, the main reason for the accuracy boost in DUCT could be attributed to task vector, which is an existing technique for addressing multiple tasks simultaneously. In reviewer's opinion, despite that the authors make certain modification on the weighting strategy, the paper fails to provide new insights to this technique on why it is effective in addressing DIL problem. One possible aspect the reviewer can think of is to explain why applying DUCT 'places the same class of different domains together', as suggested in the visualization in fig. 5.\\n\\n2. The notation in the paper needs improvement. 
First, in equation 5, $\\\\phi^m_i$ should not use $i$ as subscript as it indicates the index of the summation. Second, $\\\\beta$ and $\\\\gamma$ should be explained once they appear in line 300.\", \"questions\": \"1. As shown in fig. 2, the initial performance of DUCT on the first domain is not optimal. Please further elaborate on this issue.\\n\\n2. The parameter sensitivity analysis in fig. 3(c) indicates that DUCT still achieves decent performance when the head-merge ratio $\\\\alpha_W$ is small. What if the ratio is set to zero? \\n\\n3. Can the proposed method be applied to class-incremental learning, given that it treats classes from the incoming domain as new categories?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
55Ruu7bF4b
MambaFormer-MOE: Mamba-Transformer-based Mixture-of-Experts for Time Series Prediction
[ "Weijian Li", "Han Liu" ]
We propose MambaFormer-MOE, a Mamba-based mixture-of-experts (MoE) model for multivariate time series prediction. There are three major features of our model. 1. We propose a temporal modeling module based on the Mamba architecture to model temporal correlations with linear complexity. 2. We propose a cross-variate correlation modeling mechanism based on self-attention to equip Mamba with multivariate time series prediction capability. 3. We propose an MoE mechanism with experts that specialize in mixing the variates in different ways, making the model generalizable to multivariate time series from different domains. Our empirical results demonstrate that our model has SOTA prediction performance on various multivariate time series datasets.
[ "mamba", "transformer", "mixture-of-experts", "time series prediction" ]
https://openreview.net/pdf?id=55Ruu7bF4b
https://openreview.net/forum?id=55Ruu7bF4b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "Br9yHYc0LH" ], "note_type": [ "comment" ], "note_created": [ 1729018044605 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"does not follow the page limit (only 4 pages of main content), and the paper is not complete with empty experimental sections.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
55EO8gSCBT
Experimental Design for Nonstationary Optimization
[ "Darshan Patil", "Pranshu Malviya", "Maryam Hashemzadeh", "Sarath Chandar" ]
Traditional methods for optimizing neural networks often struggle when used to train networks in settings where the data distributions change, and plasticity preservation methods have been shown to improve performance in such settings (e.g. continual learning and reinforcement learning). With the growing interest in nonstationary optimization and plasticity research, there is also a growing need to properly define experimental design and hyperparameter search protocols to enable principled research. Each new proposed work typically adds several new hyperparameters and makes many more design decisions such as hyperparameter selection protocols, evaluation protocols, and types of tasks examined. While innovation in experiment design is important, it is also necessary to (1) question whether those innovations are leading to the best progress and (2) have standardized practices that make it easier to directly compare to prior works. In this paper, we first perform an extensive empirical study of over 27,000 trials looking at the performance of different methods and hyperparameters across different settings and architectures used in the literature to provide an evaluation of these methods and the hyperparameters they use under similar experimental conditions. We then examine several core experiment design choices made by the community, affirming some while providing evidence against others, and provide concrete recommendations and analysis that can be used to guide future research.
[ "plasticity", "continual learning", "experiment design" ]
Reject
https://openreview.net/pdf?id=55EO8gSCBT
https://openreview.net/forum?id=55EO8gSCBT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yCjm50NtGM", "xtHAjbMxLL", "wLmRL9h3lG", "smjb9KRGGX", "sjar5UOrqb", "rxdtivT6f0", "qc57FyBfjb", "qMyfKpdfXL", "pvfdwwqO0k", "oXwEv0uxTS", "lvgwYOR7pq", "gxuTyvES7z", "fMnJybSD0B", "fFVAEpQTls", "enU1CEtxun", "d9ZaSjPhVr", "ZOo6QczKBO", "XvfrikHvr9", "X5mA0Cudgg", "WbtNi1dGCe", "U6D436DZt7", "Txalkgaz4c", "Pc9jJSFUDf", "OAnGj4Xwn8", "LRRfGRQCgo", "Kf4MTSEiFL", "JFhrFcrYRL", "GFNNVrYrlq", "G41p3CQxKI", "EluvIsZho9", "9b4SZJLWng", "7MJLviMDKr", "53GHySb18d", "4FpOQuJB7n", "45cfSOz6SZ", "2VGZyqBstO" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1737524096680, 1732298819937, 1730681239429, 1732144545507, 1733274987602, 1733180220820, 1733184682280, 1733125064299, 1733125000213, 1732509198030, 1732600611917, 1730708556518, 1732144624772, 1734906968769, 1732564474580, 1732144754330, 1732509045127, 1733200219773, 1732780576876, 1732144256183, 1733124976501, 1732144532316, 1732144670549, 1732909717222, 1732613658864, 1733124888077, 1732654958955, 1732144371003, 1732509146706, 1732254855908, 1733180244606, 1732509240919, 1733168208397, 1729830460433, 1731198987062, 1732781737756 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_EYTk" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_EYTk" ], [ 
"ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_EYTk" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_MdJD" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_2oWR" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Area_Chair_itx9" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_wsyP" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_wsyP" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_2oWR" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_wsyP" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_MdJD" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_2oWR" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_MdJD" ], [ "ICLR.cc/2025/Conference/Submission10996/Reviewer_wsyP" ], [ "ICLR.cc/2025/Conference/Submission10996/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"After 
reading the authors\\u2019 response:\", \"comment\": \"I appreciate authors\\u2019 responses and their modifications, which I think has improved the paper\\u2019s presentation. Specifically, the revision has addressed most questions, except for the most important question above (i.e., my first question).\\n\\nOn the negative side, I think the revision (and the response) falls short of concrete improvements towards addressing the weaknesses mentioned above. In particular, the first contribution of the paper, as mentioned is the abstract, is the evaluation and comparison of existing algorithms, which I think can be potentially valuable; however the authors performed this comparisons on some of the simplest and most unrealistic benchmarks available in the literature. This significantly weakens the intended contribution (given that this is a fully empirical paper). The argument that these benchmarks are also used in existing works is not convincing, because those works also include several better and more realistic benchmarks that could have been used here (refer to the weaknesses above for pointers and detailed discussion). \\n\\nThe fact that the online baseline in the reported experiments perform as well as an average LoP method, potentially suggest that **no** Loss of Plasticity occurs in the tested environments, rendering LoP studies irrelevant on the tested benchmarks (otherwise the online-baseline\\u2019s relative performance would have considerably decayed over time due the *lost plasticity*, by definition of LoP).\\n\\nAs another weakness regarding contribution, the last line of the abstract claims to \\u201cprovide concrete recommendations and analysis that can be used to guide future research\\u201d. 
While the paper provides some interesting insights that can be useful for future research, I think many of these recommendations are not actually relevant for realistic continual settings for the reasons detailed in the weakness section above; and overall they do not direct the field towards addressing challenges that really matter in long term.\\n\\nOn the plus side, some of the insights and recommendations are interesting, especially for the simplified setting of non-stationary supervised learning. The gap between train and test can be important (I have personally observed it in some of my older continual learning experiments). I also think sensitivity analyses like Fig. 8 are useful for future use of these methods.\\n\\nRegarding my score, the grounds for rejection are not due to mistakes or inaccuracies in the paper but rather my judgment of the overall contribution of the paper, considering the rather high bar for ICLR. This judgment is, of course, influenced by my subjective perceptions of what matters in LoP research, which may prove inaccurate with the test of time. I actually lean toward a score of 4 (but unfortunately the possible scores are quantized at 3 and 5). Given all the improvements in the revision and the remaining concerns, I am keeping the score of 3 and reducing my confidence to 2, favouring judgment about the contribution from other expert reviewers.\"}", "{\"summary\": \"The paper presents an empirical study comparing various existing methods to mitigate plasticity loss in continual learning. The paper makes two contributions: (1) a comparison of existing methods under a unified setting, and (2) an evaluation of and suggestions for hyperparameter selection protocols, number of seeds, and train vs. test accuracy evaluation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Continual learning is a rapidly growing and important research area. 
An independent comparison of existing algorithms under a unified setting aids in identifying the most effective techniques, thereby guiding future research.\", \"weaknesses\": [\"The study setting has significant limitations and is far from realistic for continual learning. My main concern is the relevance of the proposed recipes to more realistic continual learning scenarios. Below, I summarize different ways in which the study\\u2019s setting is limited.\", \"Distribution shifts: The paper uses simple forms of distribution shifts, namely pixel permutation, label permutation, and noisy labels. Although these methods are frequently used in existing literature, they are highly unrealistic. Pixel permutation, for example, never occurs in real-world scenarios (and the use of CNNs for these types of images is questionable). Moreover, these distribution shifts are simple to address, as they can be resolved by adjusting weights in the first or last layer. While benchmarking is an unresolved problem for continual learning and good benchmarks are still limited in the literature, more realistic distribution shifts have been proposed in other works (e.g., see the first three environments studied in Dohre et al., 2024), which could be used in the current paper.\", \"Test/Train sets: In a realistic continual learning setting, there is no separate test or train set. Instead, there is a single, continuous stream of incoming data, to which a model adapts. Cumulative online loss serves as the primary performance measure.\", \"Random seeds: True continual learning settings do not involve random seeds for the same reason discussed above, especially in scenarios with sufficiently long data streams.\", \"Hyperparameter selection: Continual learning is best achieved through continual optimization, which includes algorithms for continual hyperparameter optimization. 
Here, hyperparameters (e.g., learning rate) are optimized and updated at every training step, over the whole lifetime of agent. See, for example, IDBD, Hypergradient Descent, and MetaOptimize.\", \"I understand that these limitations are also present in many existing works. While evaluations in limited settings are acceptable in experimental sections for papers introducing new algorithms, such limitations are insufficient for an empirical study aiming to provide guidelines for future research.\", \"Lastly, the scale of the experiments and models used is relatively small for a fully empirical study.\"], \"questions\": \"Could the authors conduct experiments on more realistic benchmarks from the literature or propose a new, more realistic benchmark?\\n\\nIn Fig. 9, the Online baseline (with no LoP technique) outperforms some LoP algorithms. What is the reason for this? Could it be due to insufficient hyperparameter tuning? \\n\\nThe paper suggests that test accuracy rankings differ significantly from train accuracy rankings. Could the authors quantify this gap for different algorithms? Although the rankings change, the actual performance difference might be minor. \\n\\nHow would CReLU perform compared to other methods if it used the same number of weights? \\n\\nWhat do the two curves in Fig. 3 represent? There are no legends. \\n\\nThere are also a few typos on page two.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 2/2\", \"comment\": \"**[Takeaways from 4.2]**\\nWe have updated the writing in 4.2 and updated Figure 2 to provide more clear takeaways. 
Specifically, (1) we see that a well tuned Online baseline actually performs surprisingly well (2) Training accuracy starts to saturate on 4 out of the 6 settings we study (3) For test performance, Hare and Tortoise and CReLU do well across almost all settings, Shrink and Perturb does well on with ResNet, and L2-init does well with MLP and badly with ResNet. \\nSince we are selecting the best configuration for each method, hyperparameter sensitivity probably does not have a huge effect on these results. Further analysis of why each method works to the level it does, while useful, we believe is out of scope for this paper.\\n\\n**[Non aggregated analysis of Section 4.3]** \\nWe have added Figure 15 and Table 3 that present a non aggregated version of the results in Section 4.3. Similarly to the full version, when considering every configuration, there is a positive correlation. When restricting to the top 20% of configurations, the correlation disappears for most of the methods/settings. Thank you for this suggestion.\\n\\n**[Non aggregated analysis of Section 4.4]** \\nWe have added Figures 10 and 11 to Appendix D.1 that present a non-aggregated version of the Figures in Section 4.4. They show that there is in fact some difference between the methods in terms of how sensitive they are to the number of seeds used. For example, other than the Online baseline on Noisy CIFAR-100 with ResNet, the configuration evaluated as best in the search is very likely to at least be a top 3 configuration even with 1 seed (Figure 10b). We can also see some differences in how many seeds it takes to accurately rank the different configurations for a method in Figures 11a and 11b. Thank you for this suggestion.\\n\\n**[Bootstrapping typo]** \\nThank you for pointing it out, it has been fixed.\\n\\nWe thank the reviewer for their valuable comments and helping improve our work. 
If we have addressed your questions and concerns, we would appreciate it if you could further support our work by increasing your score. If there are more questions/concerns, please let us know.\"}", "{\"comment\": \"Hello, thank you for your response and your continued interaction. Unfortunately, we did not have time to implement the necessary changes required to show equivalence of our setting and run the requested experiments before the deadline We address the differences between our setup and the setups in other works below:\\n\\n**[ViT performance lower on CIFAR-100]** \\nThe main cause of this is that we are using a smaller version of the ViT used in the Hare and Tortoise paper. The one in that paper had 12 layers, whereas due to time/resource constraints, we only used 4 layers for these experiments. They also use data augmentation and a learning rate schedule with warmup. The combination of these improves performance significantly.\\n\\n**[ResNet performance on CIFAR-100]** \\nThere are a few differences between our setup and the setup presented in Dohare et al. 2024. The first as you point out is data augmentation. The second big difference is the use of a learning rate schedule. Dohare et al use a learning rate schedule for their CIFAR-100 experiments where they decay the learning rate to by a factor of .2 three times per task. The third is that although the last split is noise free, each split has only 10% of the total data, and the dataset is not additive like Incremental CIFAR-100. All 3 of these factors have a big effect on the final accuracy.\\n\\nUnfortunately, because of time constraints, we weren\\u2019t able to run the experiments that you requested (as they require not only running experiments with the larger models, but also modifying the structure of our code to enable the extra features such as data augmentation and learning rate schedules). 
We hope that the explanations above can help explain the discrepancy.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nSince this is the last day where reviewers are allowed to communicate with the authors, we kindly ask you to let us know whether our additional results and changes have addressed your concerns, and if so, to please adjust your score accordingly.\\n\\nThank you, \\nThe Authors\"}", "{\"comment\": \"Thanks to authors for their effort in addressing the raised concerns about realistic benchmarks, and for conducting experiments on Continual CIFAR100. Given these improvements, I raise my score from 3 to 5.\"}", "{\"comment\": \"Hello, we would like to inform you of our expanded results using Vision Transformers on Incremental CIFAR-100. The main results can be seen in Global Response #2. We would greatly appreciate it if the reviewer can take a look at these results as well as our new manuscript which we reorganized for clarity and let us know if they have any further questions or concerns. If not, we would appreciate it if you could further support our work by increasing your score.\"}", "{\"comment\": \"Thank you for your response. We would like to let you know that we have finished running experiments with Vision Transformers on the Incremental CIFAR-100 setting described in [1]. The main results are summarized in Global Response #2, and they are all generally in line with our previous conclusions. We thank the reviewer for pushing us to do these experiments as they did make the evidence for our conclusions stronger. We now have experiments that provide coverage on two realistic data shifts (Noisy and Incremental CIFAR-100) and two realistic architectures (ResNets and Vision Transformers). If we have addressed your concerns about our work, we ask that you please support our work further by raising your score.\\n\\n[1] Dohare, Shibhansh, et al. 
\\\"Loss of plasticity in deep continual learning.\\\" Nature 632.8026 (2024): 768-774.\"}", "{\"comment\": \"Hello, as the discussion period is ending soon, we would like to hear your thoughts about our updates and our response to your review. If we have addressed your concerns, we would appreciate it if you could further support our work by increasing your score.\"}", "{\"comment\": \"Thanks for the clarification.\\nGiven that, I think the last sentence in the line 431 should be updated.\\nWill update the score.\"}", "{\"summary\": \"The authors investigate the experimental procedures that are used to evaluate algorithms in continual learning settings.\\nThe submission points out that these practices are usually unaddressed or only implicitly addressed in current experimental work in continual learning.\\nBy focusing on a well-curated set of datasets, nonstationarities, methods and architectures, the submission poses a critical analyses of these practices and implicit assumptions.\\nA few interesting conclusions include that maintaining trainability may not be indiciative of generalizability, and that several tasks may be needed to evaluate a set of hyperparameters for continual learning.\\n\\nI am currently rating this paper as marginally below acceptance. 
However, I am willing to increase my score if some of the concerns below are addressed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The problem being addressed in this submission is timely, understudied and important.\", \"Empirically, the submission poses and answers several important questions regarding best practice for non-stationary optimization.\"], \"weaknesses\": [\"While most of the paper provides concrete takeaways which question current practice, there are some inconclusive results that could use further clarification\", \"Several of the results aggregate performance across several different categories of methods (e.g., architectural(crelu), regularization and resetting). A few analyses on individual methods, non-aggregated, would improve the empirical results and add clarity.\", \"The presentation of the paper is mostly good, but with room for improvement (see below).\"], \"questions\": [\"Figure 2 (presentation): It is difficult to infer conclusions from this plot due to the number of baselines and settings being compared in a single graph. I wonder if this information is not better represented in a table, because the relative ordering of the settings (on the x-axis) and the placement of MLP next to ResNet does not have any semantic meaning in the presentation of these results.\", \"Figure 2 (results): I am surprised that there is so much variation in the results for different combinations of methods and nonstationarities. One thing potential problem is that the ranking is too sensitive to statistically insignificant performance differences. Do you know how these results would look like if average test accuracy was reported instead?\", \"Figure 3: The lines in these graphs are not labelled, I assume one is for protocol 1 and the other is for protocol 2? If that is the case, I do not see that large of a difference between the two (except on shuffled CIFAR-10). 
Thus, I am uncertain about the conclusion that \\\"protocol 2 transfers better to the unseen data\\\". The conclusion suggested at the end of Section 4.1 does not seem well-supported by this data.\", \"Clarification for statistical bootstrapping: what exactly is being resampled for the estimate? It is not clear how \\\"resampling the seeds\\\" means, because bootstrapping usually involves resampling from some data to construct an estimator.\", \"Clarifying seeds in Section 4.1: Why are the total number of seeds quoted (n=20) unequal between protocol 1 and 2? It seems like that the seeds are partitioned between model selection and evaluation? As I understand the second paragraph, 10 seeds are used for model selection and 10 seeds are used for evaluation, yielding the total of 20? But in that case, why does protocol 2 only use 5 seeds?\", \"There seems to be no clear takeaway in Section 4.2: it would be helfpul to also investigate the contributing factors for generally well-performing methods. For example, are the methods performant because they are more robust to hyperparameters (and hence, protocol 2 can easily identify good hyperparameters)? I do not think Appendix C answers this question.\", \"Section 4.3: The strong conclusion here is valuable. I wonder if this conclusion depends on the plasticity-preserving method. 
I am not able to tell from Figure 4, but presumably some methods may better correlate train and test accuracy, which would be hidden in this combined analysis.\", \"Section 4.4: Again, I wonder if the number of seeds needed to identify a good hyperparameter configuration depends more strongly on the method used for training, rather than the aggregate analysis.\", \"** Minor Comments\", \"Many instances of \\\"boostrapping\\\" should be replaced with \\\"bootstrapping\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1/2\", \"comment\": \"We thank the reviewer for their comments. We are glad that the reviewer appreciates that our work provides a unified and independent evaluation of the works done by the plasticity community. We address the reviewers' questions and concerns below.\\n\\n**[Use of unrealistic benchmarks]** \\nOur use of the datasets and nonstationaries studied was motivated by the fact that these are the types of settings that are in fact used in plasticity research, and we wanted to give empirical recommendations to the plasticity community. Several recent works [1,2,3,4,5] have used these settings as a way to simplify and study the core problem, including Dohare et al., 2024. We also do acknowledge the need for newer and more difficult settings in our discussion of our results in Section 4.3. We also would like to point out that the Noisy distribution shift is representative of a fairly realistic distribution shift where you get higher quality data over time, and is encountered in both the supervised setting and the reinforcement learning setting. 
Given that these are commonly used settings and we do include a realistic setting, we do not think this should be a reason to reject the paper.\\n\\n**[Use of Train/Test sets]** \\nOur work aims to facilitate the research that people have started doing in nonstationary optimization/plasticity. While the setting with only a stream of data is a valid continual learning setting, there are several use cases, both in research and in real systems, where there would be a train/test split in order to quantify the generalization capabilities of the system. Many works [1,2,3.4,5] in both the general continual learning field and in the plasticity field have also introduced such a split, as it allows us to better probe the abilities of these systems.\\n\\n**[Use of random seeds]** \\nSimilar to our point above, we agree that when continual learning systems are deployed, it probably would not make sense to have random seeds. The goal of our work, however, is to facilitate the research on these systems, and being in order to evaluate whether the methods we are developing are overfitting to a single task sequence or initial random conditions, we need to evaluate with multiple seeds, as has been done throughout the continual learning literature [1,2,3.4,5].\\n\\n**[Continuous hyperparameter selection]**\\nContinuous hyperparameter selection is a separate research problem within continual learning. It would most likely improve the performance of the methods we study, but as we are trying to facilitate and provide recommendations for research in plasticity, applying those methods in our work when they are not used in the plasticity community would make our recommendations less applicable and helpful to this community.\\n\\n**[Limited settings]** \\nOur goal with this work was not to create a realistic continual learning system or to provide empirical guidelines on how to create a realistic system. 
It was to provide empirical research recommendations to the growing plasticity community. The work in this community tries to create methods that can deal with nonstationary optimization and to study it, we need to make simplifying assumptions and make sure we are providing repeatable results. This could lead to limited settings, but is also useful to make progress on the problem.\\n\\n**[Small scale of experiments]** \\nAs a part of this study, we ran 27,600 trials across multiple datasets, distribution shifts, architectures, and methods. The ResNet architecture is also one of the larger architectures studied in plasticity research. We believe that this is an acceptable scale for such an empirical study.\\n\\n**[Online algorithm doing better than baselines]** \\nWe are not sure of how well the online baseline was tuned in prior work, but one of the contributions of our work is showing that a well tuned online baseline (with L2 regularization, we allow L2 regularization for all methods except L2-Init) can be quite competitive and beat out several existing plasticity methods.\\n\\n**[Gap between train and test accuracies for different methods]** \\nWe have moved the original Figure 2 to the appendix (it is now Figure 14), and replaced it with a bar chart (Figure 2a) showing the train and test accuracies achieved by each method. Here, we can see that it is not just the ranking but the actual performance that can change significantly going from train to test. For example, methods such as L2-init and Hare and Tortoise are worse than the best methods on training accuracy by 10-20% on both Shuffled CIFAR-10 and Permuted CIFAR-10, and yet they are the best and second best methods for test accuracy on those two settings. 
We can also see in Figure 2b that several of the method rankings generated when selecting for the best train accuracy are actually anti-correlated with method rankings generated when selecting for test accuracy.\"}", "{\"metareview\": \"In this paper, the authors address experimental design for methods addressing the training of deep networks in settings that involve optimizing the model on changing data distributions, i.e. that involve non-stationarity or require \\\"plasticity\\\" such as continual learning or reinforcement learning. The authors argue for more rigorous and careful evaluation to compare methods and prior work and thus enable progress. The authors perform a comprehensive experimental study, examine existing practices and then provide recommendations.\\n\\nThe reviews were borderline but not high variance with two leaning accept and two leaning reject (5, 5, 6, 6). The reviewers found the experiments extensive, the topic important and timely and the insights useful. As weaknesses the reviewers noted that they found the experiments somewhat \\\"unrealistic\\\" (e.g. permuting pixels instead of down-sampling), some of the architectures inappropriate for the given task, some lack of novelty in the recommendations, some lack of clarity / inconclusiveness of results and then issues with clarity / typos in the writing.\\n\\nThere was significant discussion between the reviewers and the authors, and afterwards between the reviewers and the AC. Multiple reviewers agreed to revise their scores upwards after reading the author response, which seemed compelling. The main point of discussion between the reviewers / AC centered around the concerns regarding how realistic the distribution shift settings were. Note, one reviewer who was leaning 6 stated that they were now leaning reject (but presumably could no longer update their score). 
Another who had put 5 stated they were now leaning more towards 3 (pointing out that they realized that the authors used a much smaller network than the standard ViT in their experiments). One reviewer found that the experiments were appropriate, while the other three found them too unrealistic, one stating \\\"the experiments remain insufficient for a paper claiming \\\"experimental design in non-stationary optimization\\\" in its title.\\\"\\n\\nThe discussion with reviewers seemed to establish a consensus that the paper fell below the bar for acceptance. Therefore the recommendation is to reject the paper. However, the reviewers all found the work timely and important, and thus the authors are encouraged to strengthen the paper for a future submission.\", \"additional_comments_on_reviewer_discussion\": \"As written above, there was discussion between the authors and reviewers. In summary, the authors addressed concerns regarding how unrealistic the experiments were, justified model choices, responded to concerns of lack of novelty, improved the clarity and addressed typos and added new experiments using ViTs on Incremental CIFAR-100. The reviewers increased their scores after reading these (one from 3->5 and others by one point).\\n\\nThe AC started a discussion regarding concerns on how realistic the experiments were. This discussion really moved the consensus of the reviewers toward reject. One reviewer who was leaning 6 stated that they were now leaning reject (but presumably could no longer update their score). Another who had put 5 stated they were now leaning more towards 3 (pointing out that they realized that the authors used a much smaller network than the standard ViT in their experiments). 
One reviewer found that the experiments were appropriate, while the other three found them too unrealistic, one stating \\\"the experiments remain insufficient for a paper claiming \\\"experimental design in non-stationary optimization\\\" in its title\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe have submitted detailed responses to all the reviewers' concerns, along with a general response summarizing the changes made. With only one day remaining in the discussion period, we kindly ask for confirmation on whether our replies have addressed the reviewers' concerns, and, if so, to consider adjusting your score accordingly.\\n\\nYour feedback is crucial to improving the quality of the paper, and we would greatly appreciate your engagement before the deadline.\\n\\nThank you for your time and consideration.\\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"We thank the reviewer for their comments. We are glad that the reviewer appreciates the insights our work provides on trainability vs generalizability and designing resource constrained experiments. We address the reviewers' questions and concerns below.\\n\\n**[Various typos]** \\nThank you for pointing all of these out. We have fixed them and worked to improve the writing quality of the paper.\\n\\n**[Protocol 3 comparison]** \\nThank you for your suggestion. We have added a comparison of Protocol 3 to the other protocols to Figure 3. We see that the method rankings generated by Protocol 3 are either worse or within the confidence interval of Protocol 2. Furthermore, even if Protocol 3 manages to generate an accurate method ranking, in section 4.5, we see that it is not good at identifying the best configuration for each method. Doing well in a setting like Protocol 3 is necessary to create \\u201clifelong\\u201d agents that have unbounded lifetimes. While we do not claim that our version of this protocol is optimal, the underperformance at least invites further research. 
\\n\\n**[Why is the number of seeds and tasks different in different types of networks, same for the gradient steps?]** \\nThe number of seeds and tasks are lower for the ResNet experiments just for practical compute reasons, as it takes longer to train ResNets and you can fit fewer runs onto a single GPU. The number of gradient steps per task were set based on approximately the amount of time it took for training to converge.\\n\\nWe thank the reviewer for their valuable comments and helping improve our work. If we have addressed your questions and concerns, we would appreciate it if you could further support our work by increasing your score. If there are more questions/concerns, please let us know.\"}", "{\"comment\": \"Thank you for your response. We are glad you appreciate the improvements in the quality of the paper. We address your concerns below:\\n**[Comment on Protocol 3]** \\nOur point about protocol 3 being necessary to create lifelong agents that have unbounded lifetimes was perhaps more opinion than fact. We meant to convey the fact that one goal of lifelong learning is to create agents with potentially unbounded lifetimes, which necessarily means that we select hyperparameters (or meta-hyperparameters assuming a system where we adapt hyperparameters in the agent\\u2019s lifetime) using a smaller timeframe/fewer tasks than we want the agent to run. Protocol 3 is one way of doing that (not necessarily the only way, but at least one that has been proposed in the literature), and it does not do a great job at selecting the most successful agents. We simply wanted to highlight this as an invitation to the community to either create methods that work well with Protocol 3 or create new protocols that have the property that they can select successful agents that do well on lifetimes longer than what the protocol sees in the selection stage. We have updated our discussion of Protocol 3 in the paper. 
Thank you for your question.\\n\\n**[Training accuracy of Permuted CIFAR-100]** \\nThe original figure displayed a ranking for the methods rather than the actual values. For Permuted CIFAR-100, when optimizing for best average test accuracy, each method gets very close to 100% (with averages ranging from .99956 to 0.99977), essentially perfect training accuracy. Thus, the mean train accuracy is different between the methods, allowing us to create a ranking (the original Figure 2 plot), but the difference is negligible, which is why we replaced the original plot.\\n\\n**[Figure 2 caption]** \\nWe have updated the caption.\\n\\nPlease let us know if this addresses your questions. If so, we would appreciate it if you could further support our work by increasing your score. If there are more questions/concerns, please let us know.\"}", "{\"comment\": \"Thank you for your response. I appreciate the work you've put into running the new experiments and adding more realistic benchmarks.\\n\\nHowever, currently, I am not confident in the new results for a couple of reasons listed below. \\n\\n- Why is the performance of all the methods so poor on Incremental CIFAR-100? What is the performance of your model on the full dataset? From my knowledge, if we use l2 regularization, Adam, data augmentation, etc., with ViT, we can get close to 60% test accuracy on CIFAR-100 (for example, see Figure 5 in the Hare and Tortoise network paper). In a class incremental case, for a network resetting baseline, the average accuracy across all tasks could be close to 65%. While all the methods that you plot stay close to 40-45%. Can you plot a network resetting baseline for this experiment? That'll tell us if there is an issue with your experiment setup or if there is indeed a dramatic drop in performance in the increment CIFAR case.\\n- Similarly, why does your ResNet-18 perform so poorly on noisy CIFAR-100? 
An 18-layer ResNet on CIFAR-100 can reach 75%+ test accuracy (for example, see Extended Data Fig. 1 in Dohare et al. 2024). Your results show that all methods are between 25 and 40%. Can you plot a network resetting baseline for this experiment? Note that data augmentation can generally only explain about a 5-7% drop in test accuracy. Again, either there is an issue with your experiment setup, or there is indeed a dramatic drop in test accuracy in the noisy CIFAR case.\\n\\nAs an additional point, I still don't understand why the permuted CIFAR-100 with the ResNet experiment is still in the paper. The ResNet performs extremely poorly in that case, as test accuracy is only about 20-30%. It's probably no better than a similarly sized fully connected feed-forward network.\"}", "{\"comment\": \"Thank you for your response. We are glad you appreciate the improvements in the quality of the paper and that you feel that we have addressed most of your concerns.\\n\\nRegarding the use of \\u201crealistic\\u201d settings, we have launched vision transformer experiments for class incremental CIFAR-100, and hope to have them finished before the end of the discussion period (although not before the end of the manuscript update deadline). We would like to point out that, as you have noted, we do have a \\u201crealistic\\u201d shift in the Noisy CIFAR-100 and Noisy CIFAR-10 experiments. We\\u2019d also put forth two other points: (1) the other settings being mentioned even in Dohare et al. can also be argued as not realistic. For the supervised learning experiments, aspects such as scaling down larger images to 32x32, using simple tasks such as binary classification or tasks with data on the order of ~1200 samples, resetting the head of the network on each task or doing early stopping on tasks all can be argued as being unrealistic continual learning scenarios. Despite this, the experiments in the paper are still very useful for learning about plasticity. 
(2) There are always more benchmarks that can be added to a paper such as ours, and thus it can always be a reason for rejection. We believe, however, that our work contains enough interesting messages for the community and is thus worthy of publication.\"}", "{\"title\": \"Global Response\", \"comment\": [\"We thank all the reviewers for their comments and questions. We are glad that the reviewers found our work timely (2oWR), novel (wsyP, 2oWR), and that our insights could help guide the community in designing better (wsyP, 2oWR, EYTk, MdJD) and more efficient (wsyP, MdJD) experiments.\", \"We respond to specific concerns below, and list the changes we made to the manuscript here:\", \"We did an overall polish of the writing and fixed typos.\", \"We moved the original Figure 2 (the plot showing the method rankings) to the appendix (now Figure 14). We replaced it with a bar plot showing the actual train and test accuracies achieved by each method on each setting, and a heatmap showing how well the rankings generated in each setting correlate with each other.\", \"We changed Figure 3 from a line plot to a bar plot and added Protocol 3 to the evaluation.\", \"We improved the general readability of Figure 4a by making the points smaller and differentiating the trendlines.\", \"We moved Figures 5d-f in the original manuscript to the Appendix (Figure 9).\", \"We updated Sections 4.1 (Comparing Protocols), 4.2 (The Performance of Current Methods), and 4.3 (Correlation between Training and Testing Plasticity) with more nuanced claims and discussion.\", \"We added a section to the Appendix describing our Statistical Bootstrapping procedure.\", \"In Appendix D.1, we added Figures 10 and 11 showing the effect of the number of seeds on the hyperparameter search for each method.\", \"In Appendix D.5, we added Figure 15 showing the relationship between train loss plasticity and test loss plasticity. 
We also added Figure 16 and Table 3, showing the relationship between train accuracy plasticity and test accuracy plasticity for each method.\", \"We also would like to inform the reviewers that we have two more sets of experiments currently running, which we will update the paper with when they are finished:\", \"A set of experiments with CReLU where we reduce the architecture size to control for the number of parameters.\", \"A set of experiments structured similarly to the ResNet experiments with a Vision Transformer network.\"]}", "{\"comment\": \"Thank you for your response. We would like to let you know that we have finished running experiments with Vision Transformers on the Incremental CIFAR-100 setting described in [1]. The main results are summarized in Global Response #2, and they are all generally in line with our previous conclusions. We thank the reviewer for pushing us to do these experiments as they did make the evidence for our conclusions stronger. We now have experiments that provide coverage on two realistic data shifts (Noisy and Incremental CIFAR-100) and two realistic architectures (ResNets and Vision Transformers). If we have addressed your concerns about our work, we ask that you please support our work further by raising your score.\\n\\n[1] Dohare, Shibhansh, et al. \\\"Loss of plasticity in deep continual learning.\\\" Nature 632.8026 (2024): 768-774.\"}", "{\"title\": \"Response 1/2\", \"comment\": \"We thank the reviewer for their comments. We are glad that the reviewer finds our work timely, understudied and important. We address the reviewers' questions and concerns below.\\n\\n**[Inconclusive Results]** \\nWe have updated the paper with more concrete takeaways (specifically for sections 4.1, 4.2, and 4.3). If there are other parts that the reviewer would like expanded on, please let us know.\\n\\n**[Non-aggregated analysis]** \\nWe thank the reviewer for their useful suggestion. 
We have added Figure 15 and Table 3 that do a non-aggregated analysis of the results in Section 4.3 and Figures 10 and 11 that show a non-aggregated analysis of the results in Section 4.4 (please see below for further discussion). Furthermore, Figure 12 (present in the original manuscript) shows a non-aggregated analysis of the results in 4.6. \\n\\n**[Improvement of Figure 2]** \\nWe have moved this Figure to the appendix (Figure 14), and replaced it with a plot showing the actual values of the train and test accuracies used to generate the rankings, and a plot showing how much the different rankings correlate with each other. From this Figure, we can see that while there are a few overlaps in the error bars (representing 95% empirical confidence intervals), most of the results would stay fairly stable.\\n\\n**[Figure 3 results]** \\nWe have updated Figure 3 to be a bar plot and added the missing legend. We also added Protocol 3 to the evaluation. Each bar represents the performance of one of the protocols described in Section 3.3. You are right that perhaps our original claim was too strong. Protocol 2 is not outright better than the other protocols, but it is the only one that is not outright beaten by another protocol (taking into account the confidence intervals), and it is the outright best on Shuffled CIFAR-10. It outright beats Protocol 1 on 2 of the 6 settings, and approximately matches performance (within CI) on the other 4. Protocol 3 similarly is outright worse on 2 settings compared to Protocol 2, and within the CI on the other 4. Thus, we argue that there is not much advantage to be gained from using Protocols 1 or 3, and at least a few settings where it is disadvantageous to do so. We have updated Section 4.1 with this discussion. 
\\n\\n\\n**[Clarification for statistical bootstrapping]** \\nWith statistical bootstrapping, we sample $B$ datasets of size $n$, compute some statistic using each of these datasets, and use the empirical distribution over the statistic to create confidence intervals or compute standard errors for the sample mean of the statistic. In our case, we are sampling partitions over the seeds, $P$, and using the partition to estimate some statistics that are a function of this partition, $f(P)$. Specifically, statistics such as the *rank correlation between the rankings generated by two protocols* or the *binary random variable indicating whether the best config was selected by a protocol* are a deterministic function of the partition. In our case, since when people do hyperparameter search, they usually only sample one partition, we set $n=1$ and $B=1000$. Thus, we are sampling $1000$ different partitions (with replacement), calculating the statistic for each of the partitions, and displaying the 95\\\\% empirical confidence interval that results. We have added a discussion of this to the appendix.\\n\\n**[Clarifying seeds in Section 4.1]** \\nOf the 20 seeds, 10 are used to create a held out ranking. These are not available to any of the protocols when doing model selection or evaluation. The idea is that they represent unseen task sequences, and we can see how the method rankings generated by each protocol transfer to unseen task sequences. Protocol 1 uses the validation accuracy of all 10 of the available seeds to select the best configuration for each method and uses the test accuracy of those same seeds to rank/evaluate them. Protocol 2 uses the test accuracy of 5 of the 10 available seeds to select, and the test accuracy of the other 5 to rank/evaluate. We also add Protocol 3 to our evaluation, which uses the test accuracy of all 10 seeds over the first 10% of tasks to select, and the test accuracy of all 10 seeds over the latter 90% of tasks to rank/evaluate. 
We have updated the writing in 4.1 to make this procedure more clear.\"}", "{\"title\": \"Response 2/2\", \"comment\": \"**[CReLU with equal number of weights]**\\nWe are currently running this experiment and will update this post and the manuscript when we have the results. Likely for MLP, the results would not be significantly different, as to get an equal number of parameters, you can reduce the hidden size from 128 to 120. For ResNet, the results might change, as you need to reduce the number of filters from 64 to 45.\\n\\n**[Figure 3 Legend]** \\nWe have updated Figure 3 to be a bar plot and added the missing legend. We also added Protocol 3 to the evaluation. Each bar represents the performance of one of the protocols described in Section 3.3. Thank you for pointing out the missing legend.\\n\\n**[Typos on page two]** \\nThank you for pointing them out, we have updated the manuscript to fix the writing errors.\\n\\n\\nWe thank the reviewer for their valuable comments and helping improve our work. If we have addressed your questions and concerns, we would appreciate it if you could further support our work by increasing your score. If there are more questions/concerns, please let us know.\\n\\n[1] Dohare, Shibhansh, et al. \\\"Loss of plasticity in deep continual learning.\\\" Nature 632.8026 (2024): 768-774. \\n[2] Kumar, Saurabh, et al. \\\"Maintaining Plasticity in Continual Learning via Regenerative Regularization.\\\" Conference on Lifelong Learning Agents. PMLR, 2024. \\n[3] Lee, Hojoon, et al. \\u201cPlastic: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning.\\u201d Advances in Neural Information Processing Systems, vol. 36, 2024. \\n[4] Elsayed, Mohamed, et al. \\\"Weight Clipping for Deep Continual and Reinforcement Learning.\\\" Reinforcement Learning Conference. \\n[5] Lee, Hojoon, et al. 
\\\"Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks.\\\" Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"I agree that it is unclear where exactly we should draw the line for \\\"realistic\\\" experiments. But I think we can all agree that permuting pixels is significantly more unrealistic than downsampling images. And, as I said before, I understand very well why simple experiments, even if they are unrealistic, can be instrumental in demonstrating and understanding a phenomenon. My main issue is that this paper doesn't have enough realistic settings, and I do not think that the current version of the paper crosses the line for a sufficient number of realistic benchmarks.\"}", "{\"comment\": \"I appreciate the authors' detailed reply. The clarity on some points has definitely improved, some comments:\\n\\n- Improved takeaways:\\nUnfortunately, I cannot tell what has changed in the submission. You should highlight changes made in the updated submission with blue text. It is almost impossible to tell what has changed, apart from the table. As a result, I am unable to truly appreciate the changes the authors made.\\n\\n- Non-aggregated results: while I see the new figures in the appendix, I am not sure what to take away from them. I would expect that certain methods that are more robust or performant would have higher probability of finding a successful configuration. Unfortunately, this not currently obvious.\\n\\nReading the other reviews, I would agree that the presentation of the paper is still a weakness (although it is improved). My suggestion (for a potential camera-ready/revision/resubmission) is to narrow the focus of the questions in the main text. The introduction lists 6 questions, the discussion at the end lists 8 takeaways. I would limit both of these to exactly three of whichever you think are the most interesting. 
One example of an interesting takeaway is that train does not correlate with test for loss of plasticity. In contrast, referencing appendix results in the takeaways is uninteresting and out of place. Limiting the main questions being asked will improve presentation, and allow the conclusive results to shine without being held back by other less conclusive results (which can still populate the appendix).\\n\\nOverall, I think the authors have improved the paper. In particular, I would disagree with the other reviewer: the choice of datasets/nonstationarities is fine. Not only is it a common point of reference with respect to published literature, I also think it is sufficiently complex for benchmarking (although, clearly not realistic). I am going to continue to monitor discussion with other reviewers. It is possible that I will marginally increase my score, but I will see after the discussion period.\"}", "{\"title\": \"Global Response #2\", \"comment\": \"We thank all the reviewers for the discussion and comments on our work so far. We think that our paper has significantly improved through this process. We would like to inform you that we have finished running experiments with Vision Transformers on Incremental CIFAR-100, in a setting similar to Dohare et al., 2024 [1] (the main difference being that we do not perform early stopping on the tasks). In this setting, the 100 classes of CIFAR-100 are divided into 20 splits, and on each task, one of the splits (i.e. 5 classes) is added to the dataset that the model is trained and evaluated on. We increase the number of training steps on each task proportionally to the amount of data in the task, similar to [1], and train 10 seeds, similar to our ResNet-18 experiments. We use a smaller version of the DeiT architecture [2]. 
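The task construction just described can be sketched roughly as follows (the base step count and the dictionary layout are our assumptions for illustration, not the paper's actual code):

```python
def incremental_cifar100_schedule(num_classes=100, num_splits=20, base_steps=1000):
    """Build a class-incremental task sequence: each task reveals one more
    split of classes, and the training budget grows proportionally to the
    amount of accumulated data (approximated here by the class count)."""
    classes_per_split = num_classes // num_splits
    schedule = []
    for task in range(1, num_splits + 1):
        schedule.append({
            "task": task,
            # all classes revealed so far; the model trains and is
            # evaluated on data from these classes
            "classes": list(range(task * classes_per_split)),
            # gradient steps scale with the size of the accumulated dataset
            "steps": base_steps * task,
        })
    return schedule
```

So task 1 sees 5 classes with the base step budget, and task 20 sees all 100 classes with 20 times that budget.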
We summarize a few key results below and share anonymized links of new figures:\\n\\n**Comparison of Protocols** \\nWe extend the analysis in Section 4.1 (Figure 3) where we evaluate how well the rankings generated by the different protocols correlate with rankings on task sequences not seen in training. As seen [here](https://ibb.co/ft4zHjr), in the Incremental CIFAR-100 setting with ViTs, Protocol 2 and Protocol 1 are approximately even (within the confidence intervals) in their ability to transfer, and Protocol 3 lags significantly behind both. This is consistent with our previous results in that even in this new setting, there is no advantage to be had by using Protocols 1 and 3.\\n\\n**Performance of Current Methods** \\nWe also extend Figures 2a and 2b with results from Incremental CIFAR-100 experiments. We see when looking at the [performances of individual methods](https://ibb.co/c6Zkm8f) that again, several of our findings from before also hold in this setting. We report the average of the metrics across all tasks. A well tuned Online baseline (with L2 regularization added), can be fairly strong. It essentially matches the best performing methods on training plasticity and outperforms CReLU and Continual Backprop on testing plasticity. Hare and Tortoise continues to be a very good method for testing plasticity, although CReLU seems to underperform on testing plasticity compared to its performance on the other settings/architectures.\\n\\nWe also look at [an extended version of the heatmap in Figure 2b](https://ibb.co/BThrJF5), showing the rank correlation of the method rankings on the different settings and metrics. We find again that for most settings, there isn\\u2019t a strong correlation between the rankings. 
Interestingly, there is a negative correlation between the method ranks for training plasticity on Incremental CIFAR-100 and testing plasticity on Incremental CIFAR-100, providing further evidence that with many of the benchmarks used to study plasticity, we should focus more on testing plasticity rather than training plasticity.\\n\\n**Training vs Testing Plasticity** \\nFinally, we look directly at the training plasticity and testing plasticity of the different ViT configurations sampled on Incremental CIFAR-100. We look at both the [aggregated](https://ibb.co/JmgzKhf) and [non-aggregated](https://ibb.co/fMNsdHZ) correlations between training and testing plasticity. A similar trend emerges as with the other settings where there is a clear positive trend when looking at all configurations, but when focusing on just the well performing configurations, the positive correlation disappears or even turns into a negative correlation.\\n\\nNote, we also have similar versions of the other figures in our paper (looking at the impact of the number of seeds, tasks, and configurations) with the ViT Incremental CIFAR-100 results included which do not change the messages of our paper, but for brevity\\u2019s sake, we leave them out of this response. They will be included in the final version of the paper. If you have any more concerns, please let us know.\\n\\n[1] Dohare, Shibhansh, et al. \\\"Loss of plasticity in deep continual learning.\\\" Nature 632.8026 (2024): 768-774. \\n[2] Touvron, Hugo, et al. \\\"Training data-efficient image transformers & distillation through attention.\\\" International conference on machine learning. PMLR, 2021.\"}", "{\"comment\": \"I appreciate the time and effort the authors have put into the rebuttal. It has improved the clarity of the paper, and it addresses most of the issues I raised.\\n\\nI read other reviews, and I think the most significant remaining issue for the paper is the choice of benchmarks. 
I realize that these benchmarks are commonly used in the literature and that the noisy label benchmark is realistic. However, the paper has two unrealistic benchmarks: permuted inputs and shuffled labels. I understand that having a simple benchmark is good for understanding the problem. But I think we as a field should start moving towards more realistic benchmarks and networks. Given that this paper aims to give guidelines on conducting experiments on non-stationary optimization, I think it should move the field towards more realistic settings. I suggest that the authors replace the permuted input benchmark with class-incremental CIFAR-100. And if you're open to it, include some non-stationary RL environments, like the one used in Figure 3 by Dohare et al. 2024. And I'd suggest including vision transformers in the experiments. To save compute, having ResNet for the Noisy label experiment and ViT for Class-incremental experiments would be okay. \\n\\nBecause of this issue, I'd not recommend this paper for acceptance at this moment. However, I strongly encourage the authors to change the benchmarks and networks and submit the work to the next venue. Perhaps the authors should consider submitting this work to a Journal as this type of work requires a lot of back and forth with the reviewers and large changes in the paper, which are not possible in the timeline of conferences but possible in a Journal.\"}", "{\"comment\": \"We thank the reviewer for their comments. We are glad that the reviewer finds our work novel, timely, and useful for the plasticity community both in terms of what we should focus on and in terms of helping make efficient experiments. We address the reviewers\\u2019 questions and concerns below.\\n\\n**[Fixing the writing errors]** \\nWe thank the reviewer for pointing out the writing errors in our manuscript. 
We have gone through and made the corrections.\\n\\n**[Permuted CIFAR experiment with ResNet]** \\nWhile we agree that permuted CIFAR-100 with ResNet is not typically seen in the community and that it would seem ResNets would not be well suited for the task, we chose to include them because we saw the networks did seem to be able to achieve perfect training accuracy and a nontrivial test accuracy. The permuted input task is already a very artificial one, only used to probe the abilities of continual learning methods. Despite this, there is still some structure there that ResNets are able to learn. If the reviewer strongly feels that this experiment detracts from the paper, we can remove it, but we would like to emphasize that doing so would not affect our claims and analysis.\\n\\n**[Figure 2 explanation]** \\nIn Figure 2, we perform Protocol 2 for model selection and evaluation. We use half the seeds to find the best hyperparameter configuration for each model, and the other half to evaluate the best configurations and create a ranking of the methods. We show the ranks of the methods for each of the settings in our paper. The left plot shows the method rankings when selecting/evaluating for best training accuracy, and the right plot shows the rankings when selecting/evaluating for best testing accuracy. In the updated manuscript, we have moved this Figure to the appendix (Figure 14), and replaced it with a plot showing the actual values of the train and test accuracies used to generate the rankings, and a plot showing how much the different rankings correlate with each other.\\n\\n**[Updating the claim in Figure 4 about train and test accuracy not correlating with each other]** \\nThe reviewer is correct that we need to be more precise about this claim. 
What we meant to convey is that trainability correlates with generalizability *only up to a point*, after which continuing to improve trainability does not end up correlating with the end goal of improving model performance. This suggests that (1) for the types of settings presented in this study (which are representative of what is currently studied in the literature), we should shift our focus from improving trainability to the problem of improving generalizability. (2) Studying trainability could still be a valuable problem, but we should find harder settings to do so.\\n\\nWe have updated the discussion of this claim (in Section 4.3) in the paper and thank the reviewer for the valuable suggestion.\\n\\n**[Clarification on \\u201cHow many tasks do you need to include in your training sequences?\\u201d]** \\nYes, we meant how many tasks do you need to include when doing hyperparameter selection. This has been updated in the manuscript.\\n\\n**[Legend for Figure 3]** \\nWe have updated Figure 3 to be a bar plot and added the missing legend. We also added Protocol 3 to the evaluation. Each bar represents the performance of one of the protocols described in Section 3.3. Thank you for pointing out the missing legend.\\n\\n**[Figure 4 with loss instead of accuracy]** \\nWe have added this Figure to the Appendix (Figure 12). While some of the trends are different, there are still only two settings that have a statistically significant positive correlation between train and test accuracy for the top configurations. Thank you for the suggestion.\\n\\n**[X axis label on Figure 5d]** \\nYes, sorry for the mixup. We have updated the label.\\n\\nWe thank the reviewer for their valuable comments and helping improve our work. If we have addressed your questions and concerns, we would appreciate it if you could further support our work by increasing your score. If there are more questions/concerns, please let us know.\"}", "{\"comment\": \"Thank you for your response. 
We are glad you appreciate the improvements in the quality of the paper and that you feel that we have addressed most of your concerns. We also would like to let you know that we have finished running preliminary searches with CReLU with the network size adjusted to maintain the same number of parameters. Due to time constraints, we ran a search with 30 configurations instead of 40 configurations. We found that the MLP results are approximately the same, with a slight decrease in performance in Permuted CIFAR-10 training accuracy and Shuffled CIFAR-10 test accuracy. For ResNet, the training performance is matched by the smaller network, but the test performance is significantly lower across all settings. You can see the full results in Appendix D.6.\", \"we_address_your_remaining_concerns_below\": \"**[Unrealistic benchmarks]** \\nWe have launched experiments for Continual Imagenet to extend our study, however, we would like to point out that the benchmarks used in our study are very similar to the types of benchmarks that are used in the literature. The reviewer mentions Dohare et al, 2024 as a source of realistic benchmarks. The three continual supervised learning benchmarks used in Dohare et al, 2024 are (1) Continual Imagenet: in this benchmark, the network faces a series of binary classification tasks between two Imagenet classes. The images are scaled down from the typical Imagenet size to 32x32 pixels, and after every task, the last layer is reset. We argue that our settings are at the very least as challenging as this setting, given our tasks have more classes and we don\\u2019t do partial resets between tasks. (2) Class incremental CIFAR-100: In this setting, the network gets access to data from more and more classes over time. While it is a different type of distribution shift, we\\u2019d argue that our CIFAR-100 experiments are at a similar level of difficulty. 
The noisy CIFAR-100 experiment gives access to different parts of the dataset over time (not accumulating the data like class incremental CIFAR-100), and changes the noisiness in the label space over time. (3) Permuted Input MNIST: This is simply a less challenging version of our Permuted CIFAR-10 and Permuted CIFAR-100 experiments. \\n\\nWhile we appreciate the need for more realistic benchmarks, we believe that the settings included in our study are at a comparable difficulty level to what has been used in the literature for plasticity research, both in Dohare et al, but also several of the other citations provided above. Our goal with this paper is not necessarily to propose realistic continual learning settings, but to provide recommendations to researchers working in plasticity research, which does not necessarily happen in realistic continual learning settings.\\n\\n**[Performance of Online baseline]** \\nWe want to emphasize that our online baseline (as well as all the methods other than L2-Init) also has L2-regularization added, as a basic defense against loss of plasticity. Our reasoning for this was that (1) L2-regularization is a ubiquitous technique in deep learning at this point (2) previous works have shown that an increase in weight norm essentially always leads to loss of plasticity. Given the simple and ubiquitous nature of L2 regularization, we made some form of L2 regularization default for all methods. This change makes a well tuned online baseline significantly stronger, which we believe is still a useful thing to point out to the community about the methods being proposed. If they cannot outperform simple L2 regularization, that should be noted.\\n\\nWe hope that our responses and additional experiments have addressed your concerns. If so, we\\u2019d greatly appreciate it if you would increase your score.\"}
The overall writing looks better.\", \"additional_questions\": [\"`Doing well in a setting like Protocol 3 is necessary to create \\u201clifelong\\u201d agents that have unbounded lifetimes` \\\\\", \"I agree with the reviewer EYTk, the current experiments are insufficient to claim that the protocol 3 is **necessary** to create a true lifelong agent. More experiments need to be added to investigate the properties of a lifelong agent and how the protocol 3 can fulfill those properties. If the author can address this question, I can raise my score.\", \"For the training accuracy of the permuted cifar-100 in Fig 2. In the revision, it seems like all of the methods reached 100% accuracy, whereas they did not in the original version?\", \"Same Fig 2. The legend should explain what the figure shows, not why.\"]}
I will update my score accordingly (5->6) and support acceptance.\"}", "{\"summary\": \"The paper conducts several experiments on selecting hyperparameters from the previous literatures for nonstationary optimization and demonstrates that using multiple streams of tasks for hyperparameters selection is the best approach among the commonly used protocols.\\nIt also gives insight on finding configurations with good performance under low resource budget.\\nIn addition, it shows that maintaining the training accuracy does not relate to a better generalizability in nonstationary optimization settings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Provides an evidence that, there is no correlation between trainability and generalizability.\", \"Shows large number of seeds are not necessary for finding good hyperparameters.\", \"Provides direction on resource-constrained experiments.\"], \"weaknesses\": [\"The paper is not well written\", \"Line 142: Duplicate of 'the added'. Same line, duplicate of 'be' in 'Should be actually be used'\", \"Line 153, there is no obvious connection between this and the next sentence.\", \"Line 283: 'used for selection' which I think you are referring to 'used for evaluation' instead?\", \"Line 375: 'HOW MANY SEEDS DO YOU TO EVALUATE A METHOD', miss a 'need'?\", \"Line 722: Missing the batch size for resnet-18.\", \"Figure 2 is hard to interpret. One possible improvement is to add different line shapes.\", \"Figure 3 does not have the legend.\", \"Figure 4's line is hard to differentiate the methods. Same as figure 2, maybe add different line shape.\", \"The protocol 3 is considered to be critical to do lifelong learning. 
But there is no comparison between this protocol and the other protocols to prove that statement.\"], \"questions\": \"Why is the number of seeds and tasks different in different type of networks, same for the gradient steps?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors study the experimental design in non-stationary optimization. They explore various aspects of experiment design, from hyper-parameter selection protocols to the number of seeds needed for hyper-parameter selection. The paper contains some interesting results about hyper-parameter selection, number of seeds, experiment protocols, etc.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This work is novel because there has not been any explicit focus on empirical design in plasticity research. It could be an important contribution to the community as it could speed up progress and provide a more unified focus for the field.\\n\\nThe studies about hyper-parameter selection protocol are useful as they could help develop methods that overfit less.\\n\\nResults show that the community needs to focus on test plasticity, which is interesting and needs to be evaluated further.\\n\\nThe study about the number of seeds needed for hyper-parameter selection and evaluation of a method is good, and it could save computation on many future experiments.\", \"weaknesses\": \"The paper is not well-written and contains a lot of small errors, which reduces my confidence in the quality of the paper and the results presented therein. For example, there are no labels in Figure 3, and line 424 says \\\"Figure 5b and 5b\\\", a typo in the title of section 4.4, among many others. 
I suggest the authors spend some more time on writing (maybe by using Grammarly) and making sure the presentation is up to the mark.\\n\\nThe experiments with ResNet and permuted CIFAR are not useful. When inputs are permuted, all the spatial information in the image is lost. In such a case, convolutional networks like ResNet-18 are not useful. These experiments have not been done in the community either, people have only used feed-forward networks with Permuted Input experiments. The ResNet experiments on Permuted CIFAR-100 should be removed from the paper.\\n\\nWhat is plotted in Figure 2? Is it the performance of the best hyper-parameter configuration or the average performance across hyper-parameters? And what does \\\"best\\\" mean? Highest training accuracy or test accuracy or train for Figure 2a and test for Figure 2b? \\n\\nThe results presented in Figure 4 are used to argue that \\\"improving training does not end up correlating with ... improving model performance\\\" (lines 370-371). But that is not what the figure shows. Figure 4a clearly shows that there is a positive correlation between training accuracy and test accuracy. What Figure 4b shows is that for the best hyper-parameters, there is a weak or no correlation between the two. That just means that after a point, trainability does not improve generalizability. But that does not mean \\\"improving training does not ... correlating ... improving model performance\\\". The claim on lines 370-371 and the 3rd bullet point in section 5 need to be changed. \\n\\nIn the introduction and Section 4.5, the paper asks, \\\"How many tasks do you need to include in your training sequences?\\\" The answer should be infinity because we are in a lifelong learning setting. 
Do the authors mean, \\\"How many tasks do you need to include in your training sequences **for tuning hyper-parameters**?\\\" or does that statement mean something else?\\n\\n\\nThis paper can be a good contribution to the community, but it is not up to the mark yet. I'm willing to increase my score if the authors address my concerns.\", \"questions\": \"What are the labels for both lines in Figure 3?\\n\\nWhat happens if you plot training loss and test loss in Figure 4? I suspect that could reveal more correlation. Particularly for ResNet in Figure 4b, as it gets to 100% train accuracy in most cases. \\n\\nIn Figure 5d, do you mean p(Method ranking **does not** change) instead of p(Method ranking changes)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. We are glad you appreciate the improvements in the clarity of the paper. We have also updated the paper and highlighted major text changes from the original submission and new figures in blue to more easily spot the differences.\\n\\n**[Presentation of takeaways]** \\nWe have reorganized our question list in the Introduction and our takeaways list in the Discussion. Specifically, we group the discussion around the number of seeds, tasks, and configurations into a single discussion of how to design resource efficient hyperparameter searches. The discussions around the different hyperparameter search protocols and training vs testing plasticity remain as their own paragraphs. Thank you for the suggestion on how to give our conclusions a better focus, and please let us know if you have more comments.\\n\\n**[Non aggregated results takeaways]** \\nUnfortunately, the nature of the non aggregated results is that there are quite a few methods being tested on several settings, and thus there is a lot of information to showcase which is difficult to fully describe in text. 
We think that some of the most concrete takeaways from the analyses are (1) from Figure 10b, for essentially all methods on all settings except for Online on Noisy CIFAR-100, the hyperparameter configuration selected will be a top-3 configuration even when using 1 or 2 seeds (2) from Figure 10c, the best configuration will be evaluated as a top 3 configuration in the search with very few seeds for every case except Noisy CIFAR-100 and Online on Permuted CIFAR-10 (3) Figures 11a and 11b show approximately the ordering of how sensitive the configuration rankings are for different methods to the number of seeds used in model selection. Although there is a lot of overlap, you can still see some separation between methods (4) Figure 11c shows the approximate range expected for the reported test accuracy of a method given a certain number of seeds used for selection. For most methods on most settings, there isn\\u2019t a very large range even with a small number of seeds, but you can see a few methods with large ranges for small numbers of seeds that need more seeds to narrow the range (5) Figure 12 shows that there isn\\u2019t much difference between methods for the expected improvement per extra configuration or range of expected final test accuracy (6) Figure 16b and Table 3 show that for most methods across all settings, when focusing on the top configurations for each method, there isn\\u2019t a statistically significant positive correlation between train and test accuracy. In fact, the only place where such a relationship exists is for ReDo with ResNets on Noisy CIFAR-100 (the other statistically significant positive correlations in the table are essentially vertical lines that are not well defined correlations). We have updated the manuscript and highlighted the text with these discussions.\\n\\nWe again thank the reviewer for their valuable comments and helping improve our work. If there are more questions/concerns, please let us know. 
If we have addressed your questions and concerns, we would appreciate it if you could further support our work by increasing your score.\"}" ] }
54jmXCHrTY
Understanding Self-supervised Learning as an Approximation of Supervised Learning
[ "Byeongchan Lee" ]
Self-supervised representation learning has mainly advanced in an empirical rather than theoretical manner. Many successful algorithms combine multiple techniques that are supported by experiments. This approach makes it difficult for the community to understand self-supervised learning fundamentally. To help settle this situation, we take a principled approach. We theoretically formulate a self-supervised learning problem as an approximation of a supervised learning problem. From the formulated problem, we derive a loss that is closely related to existing contrastive losses, thereby providing a foundation for these losses. The concepts of prototype representation bias and balanced contrastive loss are naturally introduced in the derivation, which provide insights to help understand self-supervised learning. We discuss how components of our framework align with practices of self-supervised learning algorithms, focusing on SimCLR. We also investigate the impact of balancing the attracting force between positive pairs and the repelling force between negative pairs. The proofs of our theorems are provided in the appendix, and the code to reproduce experimental results is provided in the supplementary material.
[ "representation learning", "self-supervised learning", "contrastive learning", "theoretical framework" ]
Reject
https://openreview.net/pdf?id=54jmXCHrTY
https://openreview.net/forum?id=54jmXCHrTY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wmmE8jCN8o", "wU2BYY2LjD", "uBJ7zOZ0On", "sAkyJBKNuk", "rtn0yIxsko", "q9z4R6MQBa", "pf0jVx7Jvy", "fTR8qXOsOM", "e34fkz3Inp", "dquL9efGYM", "Y4kxwFbIJm", "Vb8SI0sMHf", "VZP6I2dRef", "Qnm6cD0ncL", "K7mYtAexnV", "JcIRabuStF", "HwNrzYlW0A", "GL5Oeaodhx", "Epk3Ggk5p6", "ABb7PX5OVG", "9coJSQi1Xr", "9JVqfuNCaf", "65l3iH6MYi" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732508567352, 1732508788184, 1732508663180, 1737523715605, 1732116222172, 1732113806104, 1734834460635, 1730267772210, 1733152974248, 1732595494708, 1732721309143, 1732115787322, 1732508740508, 1730675267283, 1732771126659, 1733060071760, 1732114470936, 1731221298301, 1732585550118, 1730907220752, 1733227012464, 1732114874249, 1732116411167 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Area_Chair_gTUP" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_tUzQ" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_Cr2E" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_JRCA" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_Cr2E" ], [ 
"ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_NNEK" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_tUzQ" ], [ "ICLR.cc/2025/Conference/Submission5603/Reviewer_JRCA" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ], [ "ICLR.cc/2025/Conference/Submission5603/Authors" ] ], "structured_content_str": [ "{\"title\": \"Gentle reminder: The interactive discussion period will end in less than two days\", \"comment\": \"Dear Reviewer NNEK,\\n\\nThank you again for your time and efforts in reviewing our paper.\\n\\nAs the discussion period draws close, we kindly remind you that two days remain for further comments or questions. \\nWe would appreciate the opportunity to address any additional concerns you may have before the discussion phase ends.\\n\\nThank you very much. \\n\\nBest regards, \\nAuthors.\"}", "{\"title\": \"Gentle reminder: The interactive discussion period will end in less than two days\", \"comment\": \"Dear Reviewer tUzQ,\\n\\nThank you again for your time and efforts in reviewing our paper.\\n\\nAs the discussion period draws close, we kindly remind you that two days remain for further comments or questions. \\nWe would appreciate the opportunity to address any additional concerns you may have before the discussion phase ends.\\n\\nThank you very much. \\n\\nBest regards, \\nAuthors.\"}", "{\"title\": \"Gentle reminder: The interactive discussion period will end in less than two days\", \"comment\": \"Dear Reviewer JRCA,\\n\\nThank you again for your time and efforts in reviewing our paper.\\n\\nAs the discussion period draws close, we kindly remind you that two days remain for further comments or questions. 
\\nWe would appreciate the opportunity to address any additional concerns you may have before the discussion phase ends.\\n\\nThank you very much. \\n\\nBest regards, \\nAuthors.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer tUzQ,\\n\\nWe sincerely appreciate the valuable time and effort you have dedicated to reviewing our manuscript. We address each of your questions and concerns individually below. Please let us know if there are any comments or concerns we have not adequately addressed.\\n\\n---\\n\\n**[W1 & Q2] it seems to me that there is significant overlap with [a]**\\n\\nIn Section 2, we address [a]. [a] and our paper differ in terms of approach, objective, and theoretical setting:\\n\\nIn terms of approach, [a] builds on an established contrastive loss (Equation (1) in [a]) that attracts or repels other 'samples'. In contrast, our paper starts from scratch with a formulated supervised objective (Equation (5) in our paper) that attracts or repels 'pseudo-labels' (prototype representations).\\n\\nIn terms of objective, [a] explores how attracting and repelling other samples relates to alignment and uniformity. In contrast, our paper demonstrates how attracting and repelling pseudo-labels can manifest as attracting and repelling other samples. Thus, while [a] focuses on the properties of contrastive loss, we focus on the foundation of contrastive loss as an approximation of supervised learning.\\n\\nAdditionally, in terms of theoretical setting, [a] primarily examines the limiting behavior in an asymptotic setting (Theorem 1 in [a]) where the number of negative samples approaches infinity. 
On the other hand, our approach addresses a more general setting (Theorem 1 and 2 in our paper), offering broader applicability.\\n\\n---\\n\\n**[W2 & Q1] What is the relationship between these two upper bounds and the supervised learning paradigm?**\\n\\nThe loss in self-supervised learning is shown to serve as an upper bound for a supervised learning objective. Specifically, starting from the problem we formulated and assuming common practices in self-supervised learning, interestingly, a generalized form of the InfoNCE loss\\u2014commonly used in self-supervised learning\\u2014emerges as an upper bound. This provides a perspective that views self-supervised learning as an approximation of supervised learning. It offers insights into the type of optimization problem that self-supervised learning is solving. The concepts of prototype representation bias and balanced contrastive loss that arise in this process can be valuable for understanding and enhancing self-supervised learning.\\n\\n---\\n\\n**[W3] the proposed new SSL format does not seem to have significant advantages**\\n\\nThe accuracy of the standard NT-Xent loss (a special case of the generalized NT-Xent loss with $\\\\lambda=1$) is 65.98%, while the accuracy of the balanced contrastive loss is 67.40%, showing a gap of 1.42%. Considering that the chance-level accuracy for ImageNet, which consists of 1,000 classes, is merely 0.1%, achieving this level of improvement solely through proper balancing is significant. \\n\\nThe purpose of the experiments in this section is to check the validity of the theory. To this end, we focus on examining whether a more balanced contrastive loss, as predicted by the theory, effectively enhances performance. 
To ensure thoroughness, we conduct a comprehensive set of experiments across a parameter grid.\\n\\n---\\n\\n**[W4] there are many related works (like [b-c]) on the understanding of SSL**\\n\\nGiven the extensive body of work related to self-supervised learning, we had to selectively discuss relevant studies. As mentioned in Section 2, our work falls into the category of contrastive learning. However, [b] addresses non-contrastive learning. Therefore, non-contrastive learning is discussed in Section 5.1, focusing on major algorithms with references. [c] proposes a practical method that leverages video datasets to learn invariance when the dataset is not object-centric. However, we aimed to keep the setting streamlined to avoid complicating the theoretical analysis. In the revised manuscript, we have included the papers in appropriate sections.\"}", "{\"comment\": \"Dear Reviewer NNEK,\\n\\nWe sincerely appreciate the time you have taken to review our manuscript. We address your concern below. Please let us know if there are any comments or concerns we have not adequately addressed.\\n\\n---\\n\\n**[W1] Using the prototype representation bias, as defined by the authors, to represent the gap between SSL and SL is overly simplistic. In reality, no practical augmentation can achieve a very low prototype representation bias unless label information is available.**\\n\\nThe purpose of this paper is to provide a theoretical understanding rather than focusing on practical applications. In this context, the concept of prototype representation bias is introduced to better understand the connection between supervised and self-supervised learning. In reality, there are bound to be some limitations in controlling prototype representation bias through data augmentation. This is an inherent limitation of self-supervised learning itself, which must utilize data augmentation derived from domain knowledge because there are no labels. 
However, assessing the practicality of a newly proposed concept is not straightforward. Ideas inspired by the concept of prototype representation bias may lead to new algorithms in the future.\\n\\nAdditionally, in this framework, all other parts except the part where $\\\\\\\\mathbb{E}\\\\_{T, X | y} f_{\\\\\\\\theta}(T(X))$ is approximated by $\\\\\\\\mathbb{E}\\\\_{T}f_{\\\\\\\\theta}(T(x))$ (with available images) are proven rigorously (Note that in the repelling component, even this approximation is not used). In the attracting component, the approximation relies on 1) the intuition that it is natural and 2) the tendency shown in the experiment. With the supervised objective defined this way, the math works seamlessly, and the InfoNCE loss, widely used in self-supervised learning, naturally emerges. Bridging the gap between supervised and self-supervised learning in a principled way is a necessary piece that is missing in the literature. In machine learning, we believe that it is a long-standing tradition to value a theory when it is reasonably solid and its limitations are properly acknowledged.\"}", "{\"metareview\": \"This work addresses the largely empirical nature of self-supervised learning (SSL) by offering a principled, theoretical framework. The authors formulate self-supervised learning as an approximation of a supervised learning problem, deriving a loss closely related to contrastive losses, thus providing a theoretical foundation for them. Key concepts such as prototype representation bias and balanced contrastive loss emerge naturally, offering insights into self-supervised learning. The framework is aligned with established practices, particularly focusing on SimCLR, and explores the balance between attracting positive pairs and repelling negative pairs. 
The reviewers raised several concerns, including: (1) the practicality of the prototype representation bias and the insufficient theoretical contributions, (2) the limited insights gained from the connection between self-supervised learning and supervised learning as presented in this paper, and (3) the lack of sufficient experimental validation for the proposed new SSL method. Despite the authors' rebuttal and subsequent author-reviewer discussions, the paper did not receive enough support. Therefore, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"After the rebuttal, only Reviewer Cr2E confirmed that all concerns were addressed and expressed support for our work.\\n\\nReviewer NNEK raised concerns about the practicality of the prototype representation bias, stating, \\u201cI think the main issue with this paper is its insufficient theoretical contribution. Using the prototype representation bias, as defined by the authors, to represent the gap between SSL and SL is overly simplistic.\\u201d In the authors' response, they emphasized that their work focuses on theoretical understanding and that conclusions about the newly proposed concept may be premature. However, if something is premature, we should exercise caution. I agree with the reviewer's comment, and I believe the authors' response did not adequately address this concern.\\n\\nReviewer JRCA mentioned, \\u201cThe major concern being limited significance of the consequences that could be gained from a connection to supervised learning, e.g. in terms of theoretical guarantees.\\u201d Reviewer JRCA slightly increased the score to 5.\\n\\nReviewer tUzQ remained concerned that \\u201cthe proposed new SSL format does not seem to have significant advantages. 
In addition, the authors also do not conduct sufficient verification experiments.\\u201d\\n\\nOverall, I agree with most of the reviewers' evaluations and believe that this work, in its current form, does not meet the standards for publication.\"}", "{\"summary\": \"This paper focuses on understanding self-supervised learning, where the authors theoretically formulate the self-supervised learning problem as an approximation of a supervised learning problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized, with clear subsections that logically flow from theoretical foundations to empirical validations.\\n2. The mathematical derivations and proofs in this paper seem appropriate.\", \"weaknesses\": \"1. Although the authors claim to propose a new perspective for understanding SSL, it seems to me that there is significant overlap with [a]. Unfortunately, the authors do not provide a detailed analysis of the similarities and differences between the two.\\n2. The correlation between theoretical analysis and insights in this article is weak: In the theoretical analysis section, only the proof of two upper bounds is provided. My question is: What is the relationship between these two upper bounds and the supervised learning paradigm? In other words, what is the significance of their insights for the paper?\\n3. In Section 6, the proposed new SSL format does not seem to have significant advantages. In addition, the authors also do not conduct sufficient verification experiments.\\n4. In fact, there are many related works (like [b-c]) on the understanding of SSL, unfortunately, this paper does not provide a detailed discussion and analysis of their differences.\", \"references\": \"[a] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International conference on machine learning, pp. 9929\\u20139939. 
PMLR, 2020.
[b] Yuandong Tian, Xinlei Chen, and Surya Ganguli. Understanding self-supervised learning dynamics without contrastive pairs. In International Conference on Machine Learning, pp. 10268–10278. PMLR, 2021.
[c] Senthil Purushwalkam and Abhinav Gupta. Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases. Advances in Neural Information Processing Systems, 33: 3407–3418, 2020.

**Questions:**
1. In the theoretical analysis section, only the proofs of two upper bounds are provided. What is the relationship between these two upper bounds and the supervised learning paradigm? In other words, what is the significance of their insights for the paper?
2. What is the significance of self-supervised learning from this perspective? At a high level, beyond some conclusions that are very similar to [a], it is difficult to capture additional information.

**Flag for ethics review:** No ethics review needed.
**Rating:** 5
**Confidence:** 4
**Code of conduct:** Yes

**Comment:** The reviewer thanks the authors for the response -- they addressed the concerns and the reviewer maintains the positive rating.

**Comment:** Dear Reviewer tUzQ,

We sincerely appreciate your response and thoughtful feedback. We understand your concerns regarding the sufficiency of the validation experiments and would like to provide some clarification.

The transformation of the attraction/repulsion mechanism from pseudo-labels to samples has been fundamentally validated mathematically. To support this with experimental validation, we have conducted the following:

1. **Prototype Representation Bias**: We performed experiments to investigate the biases that emerge during the transition from supervised to self-supervised learning problems.
2. **Balanced Contrastive Loss**: We conducted extensive experiments to evaluate its effectiveness, including pre-training and end-to-end evaluations across all parameter pairs.
3. **Key Assumptions**: We ran additional experiments to assess and validate the importance of the core assumptions.

If there are specific additional validation experiments or directions you believe we should explore, we would greatly value your suggestions.

Thank you once again for your time, effort, and valuable feedback during the review process.

Best regards,
Authors.

**Title:** thank you for response
**Comment:** Dear authors, thank you for answering my questions. I still have reservations about the proposed interpretation that are not resolved, the major one being the limited significance of the consequences that could be gained from a connection to supervised learning, e.g. in terms of theoretical guarantees. The submission covers insights that are already established in practice. Thus, while the proposed connection is itself interesting, the consequences specified have limited significance. After reading the other reviews, and to reflect the authors' efforts during the discussion period, I slightly raised my score.

**Comment:** Dear Reviewer Cr2E,

We deeply appreciate your time and effort in reviewing our manuscript. We address each of your questions and concerns individually below. Please let us know if there are any comments or concerns we have not adequately addressed.

---

**[W1] this choice may vary substantially across datasets and problem domains**

We agree that the choice of data augmentation should vary depending on the application.
The fundamental spirit of self-supervised learning lies in leveraging domain knowledge about the application when labels are not available. Specifically, it involves utilizing knowledge about what kinds of transformation invariance the desired representation should possess for a given application. Our work focuses on grounding contrastive losses under the assumption that such data augmentation is already provided. In reality, controlling prototype representations using only domain knowledge is challenging. Exploring how data augmentation techniques should be tailored to each application will be an interesting future direction.

---

**[W2] it provides limited insights into how these parameters impact performance across diverse datasets**

Our work focuses on theoretical understanding, and the experiments were designed to align with this purpose. The purpose of the experiments is to check the validity of our theory by demonstrating the potential performance improvements predicted by the theory through the derived balanced contrastive loss. Therefore, we selected canonical datasets such as ImageNet and CIFAR-10 and focused on providing complete results across a grid of values for the parameters alpha and lambda.

---

**[W3] there are alternative frameworks in self-supervised learning**

Those algorithms are not theoretical frameworks, so their ideas are expressed intuitively, but they can be discussed as follows.
Comparing self-supervised learning to bootstrapping stems from the process of constructing pseudo-labels using only representations, without external input. This aspect can be connected to our framework, where surrogate prototype representations are generated from representations and treated as pseudo-labels. The predictor used here can be viewed as an additional module designed to predict these pseudo-labels. This algorithm falls under non-contrastive learning, which is discussed in Section 5.1.

Additionally, in clustering algorithms, the cluster assignments of transformed images are made consistent. This can be interpreted within our framework as guiding the representations of transformed images to converge toward a single prototype representation, thereby assigning them to the same cluster. Following the suggestion, we have added a discussion section to the paper to address this topic. Thank you very much for your suggestion.

---

**[W4] The paper's findings, especially around balancing attraction and repulsion forces, suggest potential for optimization.**

For example, one could consider an algorithm that adjusts alpha and lambda dynamically rather than treating them as fixed hyperparameters. Our framework enhances the understanding of the roles of alpha and lambda: alpha reflects hedging the risk with multiple negative samples, while lambda adjusts the relative magnitudes of the attracting and repelling forces. By leveraging this intuition, dynamically adjusting alpha and lambda based on the overall distribution of representations could potentially improve the learning process.

---

**[Q1] How would the assumptions, such as balanced data and the specific choice of cosine similarity, affect the generalizability of this framework to domains**

In proving our theorems, we rely on certain assumptions that are common practice in self-supervised learning, such as balanced datasets and cosine similarity. As a result, generalization becomes less straightforward in scenarios where these assumptions are violated.
However, by understanding the role these assumptions play at different stages of the proof, we may pave the way for the development of more generalized algorithms in the future.

---

**[Q2] To what extent can prototype representation bias be quantitatively minimized through practical data augmentation strategies?**

According to our framework, the following idea can be considered: data augmentation methods that merely apply color distortions or Gaussian blur may struggle to adequately cover images with the same label in the augmented image space (Figure 2 in our paper). Data augmentation leveraging generative AI may offer an alternative.

---

**[Q3] Could the theoretical framework be adapted or extended to include asymmetrical architectures, given their prominence in modern self-supervised learning algorithms?**

We discuss the asymmetric architecture in Section 5.1.

**Title:** Gentle reminder: The interactive discussion period will end in less than two days
**Comment:** Dear Reviewer Cr2E,

Thank you again for your time and efforts in reviewing our paper.

As the discussion period draws to a close, we kindly remind you that two days remain for further comments or questions. We would appreciate the opportunity to address any additional concerns you may have before the discussion phase ends.

Thank you very much.

Best regards,
Authors.

**Summary:** This paper theoretically models self-supervised learning as an approximation to supervised learning. The authors derive a self-supervised loss related to contrastive losses, including InfoNCE, while introducing concepts like prototype representation bias and balanced contrastive loss.
They apply the framework to analyze components of self-supervised learning, notably SimCLR, and explore the effects of balancing attraction and repulsion forces.

**Soundness:** 3
**Presentation:** 4
**Contribution:** 4

**Strengths:**
1. The paper's primary strength lies in its establishment of a rigorous theoretical framework that bridges supervised and self-supervised learning, addressing a significant gap in the literature by grounding widely used contrastive losses in theory. This approach contributes to the field by potentially enhancing the interpretability and rationale behind existing self-supervised methods.
2. The authors derive a self-supervised learning loss function from first principles, aligning it with established methods like the NT-Xent loss in SimCLR. This derivation offers the self-supervised learning community a deeper understanding of why and how particular loss functions, often implemented heuristically, are effective.
3. Introducing the concept of prototype representation bias, the paper reveals how self-supervised learning can be systematically evaluated and potentially optimized by minimizing this bias through data augmentation strategies. This is an innovative step that contextualizes the role of representation clustering within the self-supervised paradigm.

**Weaknesses:**
1. The authors define a surrogate prototype representation based on transformations (augmentations) of the same data point, but this choice may vary substantially across datasets and problem domains. Since many real-world applications use domain-specific augmentations (e.g., color transformations for medical images), the theoretical guarantees provided may not hold uniformly. A sensitivity analysis or empirical study of the effects of diverse augmentation choices would strengthen the validity of the surrogate representation assumptions.
2. The paper introduces parameters (e.g., the balancing factors $\alpha$ and $\lambda$ in Equation 12) that govern the relative strengths of the attraction and repulsion forces in the derived loss function. However, it provides limited insight into how these parameters impact performance across diverse datasets and tasks. A deeper empirical analysis or sensitivity study of these parameters would make the findings more robust and practically usable. Additionally, discussing guidelines for optimal parameter selection based on dataset characteristics would improve the utility of the paper for practitioners.
3. The paper situates itself within the context of contrastive learning methods, particularly the NT-Xent and InfoNCE losses. However, there are alternative frameworks in self-supervised learning, such as clustering-based approaches (e.g., DeepCluster, SwAV) and bootstrapping methods (e.g., BYOL). While the authors mention these methods briefly, they do not provide a clear comparison or discussion of how their theoretical framework might align or diverge from these alternative approaches. Providing such a comparison could position the framework more effectively within the larger self-supervised landscape.
4. The paper's findings, especially around balancing attraction and repulsion forces, suggest potential for optimization. Yet there is minimal exploration of how the theoretical insights could inspire specific algorithmic modifications or optimizations for contrastive learning. For example, insights into prototype bias could be used to dynamically adjust the loss during training. Discussing these possibilities would improve the paper's impact by suggesting actionable ways to leverage its contributions.

**Questions:**
1. How would the assumptions, such as balanced data and the specific choice of cosine similarity, affect the generalizability of this framework to domains with significant class imbalance or non-standard data representations?
2. To what extent can prototype representation bias be quantitatively minimized through practical data augmentation strategies? Would further analysis of this bias's impact across different datasets yield consistent trends?
3. Could the theoretical framework be adapted or extended to include asymmetrical architectures, given their prominence in modern self-supervised learning algorithms? What additional assumptions might be required?

**Flag for ethics review:** No ethics review needed.
**Rating:** 8
**Confidence:** 3
**Code of conduct:** Yes

**Comment:** Dear Reviewer JRCA,

Thank you for your thoughtful feedback and for taking the time to reconsider your evaluation during the discussion period. We deeply appreciate your engagement.

We would like to address your concerns regarding the significance of the consequences derived from our proposed connection to supervised learning.

1. **Theoretical contribution**: Self-supervised learning implies the idea of constructing pseudo-labels from samples.
However, when we look at self-supervised learning losses composed solely of samples, it is not immediately apparent how they relate to pseudo-labels.
In this work, what we showed mathematically is that **pseudo-label attraction/repulsion** (Equation (7)), i.e.,
$$
-s\left(f_{\theta}(t(x)), \hat{\mu}_{y}\right) + \lambda \max_{y' \neq y} s\left(f_{\theta}(t(x)), \hat{\mu}_{y'}\right),
$$
where $\hat{\mu}_{y} := \mathbb{E}_{T}f_{\theta}(T(x))$ and $\hat{\mu}_{y'} := \mathbb{E}_{T', X' \vert y'}f_{\theta}(T'(X'))$, can be optimized as **sample attraction/repulsion** (Equation (13)), i.e.,
$$
-\log\frac{\exp\left(\alpha\, s\left(f_{\theta}(t(x)), f_{\theta}(t'(x))\right)\right)}{\left( \sum_{x' \in \hat{\mathcal{X}}} \exp\left(\alpha\, s\left(f_{\theta}(t(x)), f_{\theta}(t'(x'))\right)\right) \right)^{\lambda / \nu}},
$$
which generalizes the widely used InfoNCE-type losses. This connection contributes to a firm foundation for self-supervised learning by addressing a crucial gap in the literature. To clarify this, we have updated the contents of Section 3.2, temporarily highlighted in "blue". We believe that technology built on a shaky foundation can be difficult to trust and may encounter limitations in long-term development.

2. **Practical contribution**: While it is true that our submission builds upon some established practices, our contribution lies in unifying those practices into a cohesive framework. In addition, for a theory to be considered good, it must exhibit not only internal consistency but also a reasonable degree of predictive power.
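For concreteness, the sample attraction/repulsion loss of Equation (13) above can be sketched numerically as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes cosine similarity for $s$, treats $\alpha$, $\lambda$, $\nu$ as fixed scalars, and assumes the positive view is included in the normalizing sum. With $\lambda = \nu$ and $\alpha = 1/\tau$ it reduces to an NT-Xent-style loss.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity s(u, v) between two embedding vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sample_attraction_repulsion(z_anchor, z_pos, z_batch, alpha=1.0, lam=1.0, nu=1.0):
    # Equation (13):
    #   -log[ exp(alpha * s(anchor, pos)) /
    #         (sum_{x'} exp(alpha * s(anchor, view(x'))))^(lam / nu) ]
    # z_anchor: f(t(x));  z_pos: f(t'(x)) (second view of the same x);
    # z_batch:  f(t'(x')) for all x' in the batch, the positive view among them.
    attract = np.exp(alpha * cosine(z_anchor, z_pos))
    repel = sum(np.exp(alpha * cosine(z_anchor, z)) for z in z_batch)
    return float(-np.log(attract / repel ** (lam / nu)))
```

With $\lambda = \nu$ the loss is non-negative, and it decreases as the two views of $x$ are pulled together and the anchor is pushed away from the other batch elements.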
In this regard, we provide evidence that leveraging the prototype representation bias and the balanced contrastive loss emerging from our theoretical development can lead to performance improvements. This can guide practitioners toward lines of research on how to reduce the bias in prototype representations or how to effectively balance contrastive losses.

In summary, we believe that our work will benefit the self-supervised learning community by serving as a basis and providing guidance for research.

We hope this clarifies the broader importance of our work. We value your constructive feedback, which encourages us to continue refining and communicating the implications of our findings effectively.
Thank you once again for your time and consideration.

Sincerely,
Authors.

**Comment:** Dear Reviewers,

We would like to express our gratitude once again for your thoughtful review of our manuscript. Your valuable insights and feedback are greatly appreciated.

As the extended discussion period is nearing its conclusion, we kindly remind you that there are two days remaining to share any additional comments or questions. We would be delighted to address any further concerns you might have before the discussion phase ends.

Thank you for your time and consideration.

Warm regards,
Authors.

**Comment:** Dear Reviewer JRCA,

We sincerely appreciate your valuable time and effort spent reviewing our manuscript. We address each of your questions and concerns individually below. Please let us know if there are any comments or concerns we have not adequately addressed.

---

**[W1] the connection has been addressed in previous literature [1], for example attraction/repelling and normalization has been brought up in [2].**

Since papers on contrastive learning inherently involve attracting positive pairs and repelling negative pairs, they may appear similar at first glance. We clarify the differences below.

[1] is basically a paper on unsupervised learning, which is different from self-supervised learning. Both operate in a setting where the labels are unknown, but unlike unsupervised learning, self-supervised learning has the additional aspect of generating pseudo-labels from the data. Therefore, in our paper, we formulate the problem using pseudo-labels (prototype representations) generated from the data. In addition, the contrastive loss discussed in [1] is a variation of the triplet loss (Definition 2.3 in [1]) that operates on three samples without hard negative mining. The loss we address is a loss between a sample and prototype representations that incorporates hard negative mining, and we demonstrate how it relates to an InfoNCE-type loss. Therefore, while [1] is about "classical contrastive loss in the context of unsupervised learning," our paper is about "InfoNCE-type loss in the context of self-supervised learning."

[2] explores how alignment and uniformity relate to contrastive loss (sample attraction/repulsion). In contrast, we demonstrate how our formulated supervised objective (pseudo-label attraction/repulsion) translates into contrastive loss. Thus, while [2] focuses on the properties of contrastive loss, we focus on the foundation of contrastive loss as an approximation of supervised learning. Through this process, common practices (like normalization) are unified within a single theoretical framework. Additionally, [2] focuses on an asymptotic setting (Theorem 1 in [2]) where the number of negative samples approaches infinity, whereas we address a more general setting (Theorems 1 and 2 in our paper), ensuring broader applicability.

---

**[W2] the framework seem to conflict with the use of projection head**

Extracting features before the projector boosts performance, but this does not mean that self-supervised learning algorithms fail to work when features are extracted after the projector. Therefore, the performance improvement from pre-projector features is not in conflict with the proposed framework, but rather can be interpreted within it. We interpret this as follows: while the contrastive loss directly manipulates the features after the projector, the features before the projector are indirectly pushed closer together or farther apart, encouraging the learning of more generalized representations. This process leads to the acquisition of noise-reduced and more robust high-level features. It can also be seen as a form of regularization leveraging the information bottleneck effect.

---

**[W3] aggressive augmentation negatively impacts supervised learning [3]**

The supervised setting in [3] involves training with a cross-entropy loss (Subsection B.8.1 in [3]), which is conceptually different from the loss in our supervised setting (Equation (2) in our paper). Thus, it is challenging to directly apply their results to our interpretation.

However, we can discuss the following. Theoretically, if we know that the target representation must be invariant to certain transformations, we can enforce transformation invariance by aligning the representations of transformed data. We assume knowledge of such transformations to develop the theory.

In practice, however, identifying these transformations is challenging, leading to some reliance on domain knowledge. This domain knowledge is not perfect, and since data augmentation inherently modifies the data, it has the potential to introduce negative effects.

For instance, from a human perspective, color distortion does not alter the semantic meaning of a dog in a photo, so the representation should ideally be invariant to color distortion. However, from the model's perspective, some level of color information might contribute to identifying the object as a dog.

In a supervised setting, where the labels are already available, excessive data augmentation might be unnecessary. Conversely, in self-supervised learning, where robust pseudo-labels must be constructed solely from transformed data, more aggressive data augmentation is often required.

**Summary:** This paper formulates self-supervised learning (SSL) as an approximation of supervised learning (SL), deriving a loss function related to contrastive losses. The authors introduce the concepts of prototype representation bias and balanced contrastive loss, providing some insights into SSL. They conduct experiments to validate their theoretical results.

**Soundness:** 3
**Presentation:** 3
**Contribution:** 3

**Strengths:**
1. They propose a novel theoretical framework that connects supervised and self-supervised learning.
2. They introduce the concepts of prototype representation bias and balanced contrastive loss, which play important roles in this connection.
3. They offer some practical insights based on their framework.

**Weaknesses:** I think the main issue with this paper is its insufficient theoretical contribution. Using the prototype representation bias, as defined by the authors, to represent the gap between SSL and SL is overly simplistic. In reality, no practical augmentation can achieve a very low prototype representation bias unless label information is available.
Moreover, augmentations with the same prototype representation bias might exhibit vastly different downstream performance, depending on the finer relationship between the augmentation and the data, a topic the authors have not addressed.

**Questions:** Please see the weaknesses.
**Flag for ethics review:** No ethics review needed.
**Rating:** 5
**Confidence:** 4
**Code of conduct:** Yes

**Title:** Reply for the rebuttal
**Comment:** Thanks for the authors' efforts during the rebuttal period, but I still believe that the validation experiments in this paper are insufficient. After reading the other reviewers' comments and the authors' rebuttal, I maintain my original score.

**Summary:** The submission proposes a derivation of the self-supervised learning problem as an approximation of supervised learning. To this end, in the supervised learning formulation the authors replace the labels with prototype representations given by an oracle. These prototype representations can then be modelled via the expected representation of objects sampled from a conditional distribution (x conditioned on label y) and across augmentations, i.e. $\mathbb{E}_{t, X \mid y}\, f(t(X))$. Learning under this formulation can be achieved via a triplet loss, i.e. attracting positive samples (samples from one class) and repelling negative samples (samples from different classes).

In self-supervised learning, however, one has no access to labels, which renders prototype representations unavailable. Instead, the authors use surrogate prototypes, i.e. the expected representation of a sample across its augmentations, $\mathbb{E}_t\, f(t(x))$. The authors then provide an upper bound on the loss, which yields an objective called the balanced contrastive loss, and show its connection to the NT-Xent loss used in SimCLR. One may measure the bias introduced by the surrogate by taking the expectation of the difference between the true and surrogate representations, called the prototype representation bias. The bias is shown to correlate with downstream performance.

**Soundness:** 2
**Presentation:** 2
**Contribution:** 2

**Strengths:** It is safe to say that theoretical understanding of self-supervised learning methods is relatively lacking despite increasing interest and effort. Thus, the submission addresses an important topic and provides a clear connection to the supervised counterpart. It is generally well structured, which makes it easy to follow, and provides a clear intuition of the approach. The approach covers typical components of self-supervised methods like Siamese networks, data augmentation and contrastive loss.

**Weaknesses:** While the submission addresses an interesting connection to supervised learning, the connection has been addressed in previous literature [1]; for example, attraction/repelling and normalization have been brought up in [2]. Further, the consequences of the proposed framework seem to provide limited insight. While they provide supporting arguments for the Siamese architecture, data augmentation and the InfoNCE loss from a supervised perspective, the framework seems to conflict with the use of a projection head and aggressive data augmentation. Let me elaborate on this further.

If SSL is an approximation of supervised learning, then on a downstream task the use of the output of the projection head should be more beneficial than the pre-projection features. However, this is not what one faces in practice. Interpreting SSL via supervised learning may inhibit understanding the use of the projection head. Highlighting the mismatch between pretext and downstream tasks is important if we are to gain practical consequences in designing SSL methods.
The proposed interpretation, on the contrary, seems to sweep this distinction under the rug.

Furthermore, SSL methods use more aggressive augmentation strategies than those used in supervised learning, while aggressive augmentation negatively impacts supervised learning [3]. This also seems to be out of tune with the proposed approach.

SimCLR-type losses are well understood from many perspectives, including spectral and information-theoretic ones [4, 5], so it is not fair to render them as only intuitively and experimentally supported.

The authors introduce assumptions on the choice of similarity measure and the use of normalization to derive the proposed loss, which is shown to generalize the NT-Xent loss used in SimCLR. The assumptions are needed for the derivation, but I don't think one would need to additionally show their significance empirically, especially when this is already an established practice and has been ablated multiple times in the literature. A similar issue arises with the experiments on a balanced dataset; this seems to eat up space and doesn't reveal anything new about SSL methods.

Returning to the generalization of the supervised learning problem from predicting labels to predicting prototype representations: this step is important but receives limited discussion in the submission. Since there are multiple target tasks for supervised training, the ideal prototype representations are likewise target-specific here. How does this affect the overall framework?

[1] Saunshi, Nikunj, et al. "A theoretical analysis of contrastive unsupervised representation learning." International Conference on Machine Learning. PMLR, 2019.
[2] Wang, Tongzhou, and Phillip Isola. "Understanding contrastive representation learning through alignment and uniformity on the hypersphere." International Conference on Machine Learning, pp. 9929–9939. PMLR, 2020.
[3] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International Conference on Machine Learning. PMLR, 2020.
[4] Balestriero, Randall, and Yann LeCun. "Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods." Advances in Neural Information Processing Systems 35 (2022): 26671–26685.
[5] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. "Representation learning with contrastive predictive coding." arXiv preprint arXiv:1807.03748 (2018).

**Questions:** In the experimental section with balancing parameters, how are the values in Figure 4 obtained? Is this a single run or averaged across n?

Can you please elaborate more on $\nu$ used in the proposed total loss? It seems to not be available and has no analogous term in the NT-Xent loss.

**Flag for ethics review:** No ethics review needed.
**Rating:** 5
**Confidence:** 4
**Code of conduct:** Yes

**Comment:** Dear Reviewer Cr2E,

We are pleased to hear that our rebuttal addressed your concerns.
We also sincerely appreciate your support for our work.
Your constructive feedback has been invaluable in helping us improve the paper.

Thank you once again.

Best regards,
Authors.

**Comment:** **[W4] it is not fair to render them as only intuitively and experimentally supported.**

We mention in Section 2 that self-supervised learning losses have been studied from various perspectives, such as covariance-based learning and maximizing mutual information. Our intention was solely to establish the foundation by deriving the losses from a formulated problem. Many studies accept the losses as given and investigate their effects or characteristics. However, we acknowledge that the phrasing could lead to misunderstanding. Therefore, we have removed the expression in question in the revised manuscript.
Additionally, [4] has been added to Section 2 ([5] is already cited).\\n\\n---\\n\\n**[W5] I don't think one would need to additionally show their significance empirically**\\n\\nIn the case of similarity measures, results that align with our setting are not readily available in the literature. For instance, Table 5 of [3] does not provide an apple-to-apple comparison, as multiple components ($l_2$-normalization and temperature $\\\\tau$) are adjusted simultaneously.\\n\\nIn the case of balanced datasets, we provide experimental results under our setting for the completeness of the paper. However, agreeing that it is less critical, we have moved it to the appendix.\\n\\n---\\n\\n**[W6] the ideal prototype representations are as well target-specific here. How does this affect the overall framework?**\\n\\nWe cannot know what an ideal prototype representation truly is. However, what we can realistically do is enforce transformation-invariance by ensuring that the representations under specific transformations converge to a single point. These transformations may vary depending on the task, but once we assume they are given, the theory develops straightforwardly. Naturally, we consider the centroid (expectation) of the representations of the available transformed data as their shared target. From there, the remaining parts can be mathematically proven. In conclusion, these transformations are provided as a form of domain knowledge for a specific task, and this serves as supervision (Footnote 3 in our paper) derived from domain knowledge. We develop the theory under the setting where this is given.\\n\\n---\\n\\n**[Q1] Is this the single run or averaged across n?**\\n\\nWe calculated the accuracy by taking the average over 5 independent runs. For the scale of variability, please refer to Section A.4.2.\\n\\n---\\n\\n**[Q2] Can you please elaborate more on \\u03bd used in the proposed total loss?**\\n\\nIn the proof of Theorem 4.6, we mention the ideal case. 
To provide an intuitive understanding of the value $\\\\| \\\\mathbb{E}\\\\_{T', X' \\\\vert y'}f\\\\_{\\\\theta}(T'(X')) \\\\|$, let us consider a simple example. Suppose the embeddings $f\\\\_{\\\\theta}(t\\\\_1(x\\\\_1))$ and $f\\\\_{\\\\theta}(t\\\\_2(x\\\\_2))$ lie on a unit circle. These embeddings can then be represented as $(\\\\cos \\\\theta\\\\_1, \\\\sin \\\\theta\\\\_1)$ and $(\\\\cos \\\\theta\\\\_2, \\\\sin \\\\theta\\\\_2)$, respectively. The average of these embeddings is given by $(\\\\frac{\\\\cos \\\\theta\\\\_1 + \\\\cos \\\\theta\\\\_2}{2}, \\\\frac{\\\\sin \\\\theta\\\\_1 + \\\\sin \\\\theta\\\\_2}{2})$, which corresponds to the midpoint of the chord connecting the two embeddings. Calculating the squared norm of this midpoint and simplifying the expression yields $\\\\frac{1}{2} + \\\\frac{\\\\cos(\\\\theta\\\\_2 - \\\\theta\\\\_1)}{2}$. This value approaches 1 as $\\\\theta\\\\_1$ and $\\\\theta\\\\_2$ become closer, i.e., as the two embeddings move closer to each other.\"}", "{\"comment\": \"Dear reviewers and AC,\\n\\nWe sincerely appreciate your valuable time and effort spent reviewing our manuscript.\\n\\nAs reviewers noted, we present a novel (NNEK), rigorous (Cr2E, tUzQ), and well-structured (JRCA, tUzQ) theoretical framework that addresses an important topic (JRCA, Cr2E) and establishes a clear connection between supervised and self-supervised learning (JRCA, Cr2E).\\n\\nWe appreciate your valuable feedback on our manuscript.
In response to the comments, we have carefully revised and enhanced the manuscript, including the following:\\n\\n-\\tFurther clarified the contributions of our work\\n-\\tAdded a discussion section\\n-\\tMoved the experiments on balanced datasets to the appendix\\n-\\tIncluded additional relevant references\\n\\nIn the revised manuscript, these updates are temporarily highlighted in \\\"blue\\\" for your convenience. We sincerely believe that our theoretical framework will be a valuable contribution to the self-supervised learning community.\\n\\nThank you very much,\\n\\nAuthors.\"}" ] }
54XlM8Clkg
Point Cluster: A Compact Message Unit for Communication-Efficient Collaborative Perception
[ "Zihan Ding", "Jiahui Fu", "Si Liu", "Hongyu Li", "Siheng Chen", "Hongsheng Li", "Shifeng Zhang", "Xu Zhou" ]
The objective of the collaborative perception task is to enhance the individual agent's perception capability through message communication among neighboring agents. A central challenge lies in optimizing the inherent trade-off between perception ability and communication cost. To tackle this bottleneck issue, we argue that a good message unit should encapsulate both semantic and structural information in a sparse format, a feature not present in prior approaches. In this paper, we innovatively propose a compact message unit, namely point cluster, whose core idea is to represent potential objects efficiently with explicitly decoupled low-level structure information and high-level semantic information. Building upon this new message unit, we propose a comprehensive framework CPPC for communication-efficient collaborative perception. The core principle of CPPC is twofold: first, through strategic point sampling, structure information can be well preserved with a few key points, which can significantly reduce communication cost; second, the sequence format of point clusters enables efficient message aggregation by set matching and merging, thereby eliminating unnecessary computation generated when aligning squared BEV maps, especially for long-range collaboration. To handle time latency and pose errors encountered in real-world scenarios, we also carefully design parameter-free solutions that can adapt to different noise levels without finetuning. Experiments on two widely recognized collaborative perception benchmarks showcase the superior performance of our method compared to the previous state-of-the-art approaches.
[ "Point Cluster", "Collaborative Perception" ]
Accept (Poster)
https://openreview.net/pdf?id=54XlM8Clkg
https://openreview.net/forum?id=54XlM8Clkg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wxynwDIjEj", "tTkDFekm0D", "tJvIg8aZVv", "rlwppfD3aB", "rhWg9CSzSH", "pT7jCiT4g5", "oFlTZNQE7O", "o47kP5cx7w", "a3wlDiiFdC", "TFDTVtpDKI", "SAyo0Ietdo", "JFfwErtj6E", "HiONFPwAAK", "5HpXMzMiAD", "5DaxpJGsQ0", "06xXTfT6jQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732118784414, 1732433776049, 1732685918596, 1730710753906, 1729484816187, 1734795936607, 1732118668732, 1731427426134, 1732118057882, 1732118729661, 1730718250626, 1732607246968, 1737524229965, 1732482180248, 1732117942748, 1732486187403 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13011/Authors" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_4p6F" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_tujs" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_xrYN" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_4p6F" ], [ "ICLR.cc/2025/Conference/Submission13011/Area_Chair_Rwvx" ], [ "ICLR.cc/2025/Conference/Submission13011/Authors" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_tujs" ], [ "ICLR.cc/2025/Conference/Submission13011/Authors" ], [ "ICLR.cc/2025/Conference/Submission13011/Authors" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_1x7n" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_xrYN" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13011/Reviewer_1x7n" ], [ "ICLR.cc/2025/Conference/Submission13011/Authors" ], [ "ICLR.cc/2025/Conference/Submission13011/Authors" ] ], "structured_content_str": [ "{\"comment\": [\"Thank you for your valuable comments and kind words to our work. 
Below we address specific questions.\", \"**Q1: Ablation studies on PCP, PCA, pose correction and latency compensation.**\", \"**About PCP.** Without the PCP module, which performs message packaging for other agents, this method degrades to single-agent perception for the ego agent without messages from other agents. We evaluate this baseline on the validation set of V2XSet, achieving AP\\\\@0.7 of 67.00, which is far behind our full CPPC that achieves AP\\\\@0.7 of 89.99. These results demonstrate the limitations of single-agent perception and validate the necessity of collaboration for achieving more complete scene perception.\", \"**About PCA.** Without the PCA module, which is designed to aggregate point cluster features from multiple agents, we directly merge boxes in messages from multiple agents like late collaboration. This setting is evaluated on the validation set of V2XSet, achieving AP\\\\@0.7 of 77.88, which is far behind our full CPPC that achieves AP\\\\@0.7 of 89.99. This validates that our PCA module can effectively utilize rich object information.\", \"**About pose correction.** We evaluate our CPPC under a positional error of 0.6m and a heading error of 0.6$^\\\\circ$ without the pose correction module, achieving an AP\\\\@0.7 of 34.22 on the V2XSet validation set. This result is significantly lower than the AP\\\\@0.7 of 87.37 achieved with the pose correction module, highlighting its critical role in enhancing performance.\", \"**About latency compensation.** We evaluate our CPPC under a time latency of 500 ms without the latency compensation module, achieving an AP\\\\@0.7 of 62.08 on the test set of DAIR-V2X-C.
This result is lower than the AP\\\\@0.7 of 64.61 achieved with the latency compensation module.\", \"**Q2: Comparison with BEV-based methods during aggregation.**\", \"In Appendix C.2, we theoretically analyze that the computational complexity of point cluster aggregation is approximately linearly related to the number of potential targets in the scene, i.e., $\\\\mathcal{O}(DN_\\\\text{object})$, where $D$ is a constant. However, the computational complexity for BEV-based message aggregation is $\\\\Omega(HW)$, which is related to the square of the perception range. Since $N_\\\\text{object} \\\\ll HW$ in most real scenes, our point cluster-based method is more efficient for long-range aggregation. We evaluate the inference time cost of our PCA module, i.e., 9.69 ms, and of the aggregation module in V2X-ViT, i.e., 41.42 ms.\", \"**Q3: Spatial cluster matching when pose errors exist.**\", \"**Details of spatial cluster matching.** We calculate the Euclidean distance between the centers of point clusters detected by different agents. If the distance between two point cluster centers is less than a specified threshold $\\\\epsilon_\\\\text{pose}$, they are considered to belong to the same potential object.\", \"**Cluster matching under pose errors.** When pose errors are present, aligning the coordinate space of other agents with the ego agent may result in misaligned point clusters. To address this issue, we first obtain an initial matching result through spatial cluster matching. Subsequently, we optimize Eq. (7) to estimate the accurate poses of the surrounding agents, thereby correcting pose errors to achieve more reasonable matching based on the refined poses. The experiments depicted in Fig. 4 (c) and (d) demonstrate that the proposed strategy performs effectively under various pose error scenarios.\", \"**Extension work.** We appreciate your thoughtful suggestion and have carefully considered this aspect of the method.
Indeed, in scenarios with significant pose errors\\u2014though rare\\u2014it becomes challenging for a manually set threshold to address all cases. To enhance the robustness of spatial cluster matching, we extend our current approach to a maximum common subgraph (MCS)-based spatial matching [1]. In this method, each cluster mapping within the MCS represents point clusters corresponding to the same potential object across the cluster graphs of different agents.\", \"[1] A Partitioning Algorithm for Maximum Common Subgraph Problems, IJCAI, 2017.\"]}", "{\"comment\": \"Thanks for the authors' reply. It solves my problems and I do not have any other questions.\"}", "{\"title\": \"Response to author rebuttal\", \"comment\": \"The responses of authors address most of my concerns. I will maintain my rating.\"}", "{\"summary\": \"This paper introduces a communication-efficient collaborative perception (CPPC) framework for vehicle-to-everything autonomous driving. Unlike previous methods that mainly rely on BEV features, the CPPC framework leverages point clusters to control its computational complexity and alleviate issues like high-level information loss. Specifically, the CPPC framework consists of a point cluster picking module, a pose alignment and latency compensation module, and a point cluster aggregation module to generate cluster features for subsequent predictions. The CPPC framework outperforms existing BEV-based methods on three public datasets, including V2XSet, OPV2V, and DAIR-V2X-C.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"\\\\+ This paper is written well and the motivation behind the proposed method is easy to follow.\\n\\n\\\\+ The proposed method is a systematic solution for multi-agent perception, which exploits point clusters as the intermediate representation with controllable communication costs, to balance efficiency and effectiveness. 
\\n\\n\\\\+ The proposed method outperforms several previous methods on three public datasets consistently. The extensive experimental results validate the effectiveness of the proposed modules.\", \"weaknesses\": \"\\\\- There are four thresholds used in the proposed method, which may be hard to choose in complex or unseen scenarios.\\n\\n\\\\- The proposed method mainly considers BEV-based methods for comparisons, however, it is unclear how significant its technical contributions are compared with previous point-cloud-based methods (e.g., Cooper, F-cooper). I am concerned about this because the techniques used in the main components of the proposed method, including point cluster picking, pose correction, and the SD-FPS, are quite common in the research communities of point cloud processing.\", \"questions\": \"My current concerns are mainly about the technical contributions of the proposed method. I may change my rating after reading other reviewers' comments and the authors' rebuttal. Here are some questions I wish could be addressed:\", \"q1\": \"In the proposed SD-FPS method, how to balance the weights of semantic and spatial distances ($\\\\lambda_s$ and $\\\\lambda_d$)? How will they affect the performance of the proposed method?\", \"q2\": \"Does the proposed method perform well in a crowded environment where many objects are exhibited?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to use Point Cluster as the message unit for collaborative perception. Previous intermediate methods use BEV map as the message unit, which suffer from weak object features, inefficient message aggregation and vague boundary. The proposed point cluster-based framework can solve these problems well. 
Extensive experiments on V2XSet, OPV2V, and DAIR-V2X-C demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Using point cluster as an intermediate message unit is novel and well-motivated.\\n2. The proposed CPPC framework, as well as the PCE, PCP, and PCA modules, are solid.\\n3. State-of-the-art performance on mainstream benchmarks.\", \"weaknesses\": \"My main concerns are about the effectiveness and efficiency of the proposed method, for which I think more ablation studies are required.\\n1. Ablation studies on PCP, PCA, pose correction and latency compensation are required.\\n2. The authors claim the BEV representation is inefficient during aggregation. Is there any comparison between BEV and Point Cluster-based methods?\", \"questions\": \"In section 3.5, can the authors detail \\\"spatial cluster matching\\\"? If the pose between two agents is not accurate, how to match point clusters from different agents into a single object?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper looks at the problem of collaborative perception, taking the perspective of minimizing size (and hence bandwidth) while retaining essential structural and semantic information. To do this, the Point Cluster representation is proposed, which represents objects via point coordinates, a cluster center, and point features. This representation is integrated into a new framework for packing, aggregating, and decoding messages for transmission. Results are shown across a number of collaborative perception benchmarks, demonstrating improved results over a number of competing methods.\\n\\n Reviewers appreciated the perspective of early, intermediate, and late collaboration as well as the method's improved bandwidth efficiency, performance, and clarity of the writing.
The addition of experiments with respect to real-world robustness, including pose errors and time latency, was also mentioned. A number of weaknesses were raised by reviewers, including methodological similarity to existing methods both in this (Cooper, F-Cooper) and other fields (FSD), unexplained phenomena in some of the graphs (e.g. performance spikes) as well as overall lack of clarity in the experiment section, lack of ablations, and sensitivity to the hyper-parameters. The authors provided a comprehensive rebuttal, including additional results especially analyzing early vs. late collaboration and relationship to foreground information. Reviewers expressed that this rebuttal satisfied most of their concerns, and all recommended acceptance. \\n\\n Based on this, I recommend acceptance of this paper. Overall, the paper provides a nice perspective of levels/types of collaboration and an interesting method that balances compression and retention of relevant information, both structural and semantic. I encourage the authors to incorporate all of the new discussions and results in the main paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised a number of concerns, and authors provided a comprehensive rebuttal including additional results/analysis. All reviewers mentioned that the rebuttal addressed their concerns and all recommended acceptance.\"}", "{\"comment\": [\"**Q4: Sharply rises in Figure 4 (a).**\", \"This is a good question. When the communication volume is limited to approximately 14, the performance of our CPPC drops sharply. We believe that under such a stringent communication bandwidth constraint, the sampled points in the information packaging stage become overly sparse. 
Consequently, the remaining key points fail to adequately represent the shape characteristics of potential objects, making it challenging to generate high-quality detection boxes.\", \"When the communication volume is 14, we can calculate the spatial confidence mask rate in the state-of-the-art method, i.e., Where2comm, according to Eq. (10) in Appendix C.1. The number of allowed collaborative message units is calculated as: $N=\\\\frac{2^{Comm}\\\\times8}{16\\\\times C}=\\\\frac{2^{14}\\\\times8}{16\\\\times256}=32$, indicating a mask rate of 0.9948 for BEV feature maps with shape $48\\\\times 128$. This means that only around 0.5\\\\% of BEV features are allowed to be transmitted, which is almost equivalent to no collaboration, with 55.54 AP\\\\@0.7 on the test set of DAIR-V2X-C.\", \"Overall, even under such strict communication volume constraints, our CPPC achieves 64.32 AP\\\\@0.7, outperforming Where2comm by an absolute margin of 8.78 in AP\\\\@0.7. This result validates the superiority of our point cluster-based communication mechanism.\", \"**Q5: It is unclear which method the accuracy improvement results are compared with.**\", \"On each dataset, we compare our approach with the current best-performing methods: OPV2V on V2XSet, CoAlign on OPV2V, and V2X-ViT on DAIR-V2X-C.\", \"**Q6: Technique of late collaboration method.**\", \"Late collaboration approaches adopt bounding boxes as message units. We implement this late collaboration baseline by directly using $\\\\ddot{B}^s$ for point clusters matching the same potential objects, i.e., in $M_\\\\text{share}$, and $B$ for objects exclusively observed by a single agent, i.e., in $M_\\\\text{unique}$, as the final outputs, respectively. Thus, late collaboration can neither benefit from completing object semantic and structure information from multiple sources in PCA nor use PCD (Appendix A.2) to refine them.\", \"**Q7: Solution for delay-induced errors that vary by object.**\", \"This is a great question.
In this paper, we evaluate the robustness to time latency on the test set of DAIR-V2X-C, which includes 10 km of city roads, 10 km of highway, 28 intersections, and 38 km\\u00b2 of driving regions, encompassing diverse weather and lighting conditions from real-world scenarios. Therefore, we believe this evaluation is sufficiently comprehensive to validate the effectiveness of our latency compensation module.\", \"However, we acknowledge that our existing solution has limitations. For instance, when objects move at high speeds, the artificially defined upper and lower bounds for temporal cluster matching may struggle to handle such significant displacements. We appreciate you highlighting this issue and propose addressing it by introducing point cluster flow prediction to mine temporal associations in cluster feature space in future work.\"]}", "{\"summary\": \"This paper presents a new message unit, the \\\"point cluster,\\\" to improve collaborative perception efficiency in multi-agent systems. Unlike existing message units such as raw point clouds, bounding boxes, or BEV maps, the point cluster format minimizes bandwidth usage while retaining essential structural and semantic information. Representing objects with point coordinates, a cluster center, and semantic features, this approach allows efficient inter-agent information exchange, enhances object alignment, and preserves object structure for more accurate detection. A new framework, CPPC, combines point packing and aggregation modules, addressing issues like bandwidth constraints, time delay, and pose errors, and achieves state-of-the-art performance on several benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.
**Bandwidth Efficiency**: The proposed \\\"point cluster\\\" message unit significantly reduces communication bandwidth by capturing only essential foreground object information in a sparse format, making it highly efficient compared to dense representations like raw point clouds and BEV maps.\\n\\n2. **Enhanced Object Detection Accuracy**: By preserving detailed structural and semantic information, the point cluster improves object detection accuracy, especially in complex multi-agent scenarios, demonstrating superior performance on established collaborative perception benchmarks.\\n\\n3. **Robustness to Real-world Challenges**: The CPPC framework includes robust mechanisms for handling pose errors and time delays, crucial for real-world applications. Parameter-free solutions for pose and latency issues make it adaptable to various levels of noise without additional tuning.\\n\\n4. **Clear and Effective Writing**: The paper is well-written, with a clear explanation of the proposed methods and thorough descriptions of experiments, which makes the complex technical content accessible and supports the credibility and reproducibility of the research findings.\", \"weaknesses\": \"In terms of methodology, there is a noticeable reliance on FSD, which slightly reduces the originality. However, overall, the approach is still reasonable.\", \"questions\": \"No questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your valuable comments and kind words to our work. Below we address specific questions.\\n\\n**Q1: Criteria for determining which points should be selected.**\\n- As shown in Algorithm 1 (Appendix A.1), our proposed SD-FPS strategy prioritizes points in point clusters with clear semantic features or key structural information. 
Concretely, we measure the ambiguity of each point by its confidence score $s_f$ in the segmentation head of PCE, where a higher $s_f$ indicates richer semantic information for distinguishing objects. As for structural measures, the distribution density score $s_d$ is inversely proportional to its density estimation: $\\\\frac{1}{\\\\mathcal{N}(p)}\\\\sum_{q\\\\in\\\\mathcal{N}(p)}K(p,q)$, where $\\\\mathcal{N}(p)$ is a point set in the nearby area of $p$, and $K(\\\\cdot,\\\\cdot)$ is a Gaussian kernel function to measure the similarity between the positions of two points. Thus, a lower $s_d$ indicates that removing it will not significantly affect the object shape.\\n\\n**Q2: Performance degradation when background information is included.**\\n- We include all foreground and background points by setting the foreground threshold in PCE to zero. Experimental results on the V2XSet validation set indicate an AP\\\\@0.5 of 91.37 and an AP\\\\@0.7 of 89.23, reflecting absolute decreases of 0.64 and 0.76, respectively. The primary cause of this performance degradation is the introduction of background points, which can distort the shape description of potential objects during clustering. However, the negative impact is relatively minor, as the PCP phase retains only point clusters with positive proposals (line 229 in paper). Notably, while the inclusion of background points has limited impact on performance, it substantially increases communication bandwidth requirements.\\n\\n**Q3: Performance analysis in early or late collaboration by only bringing foreground information.**\\n- Early collaboration is expected to achieve the highest performance when bandwidth constraints are not a factor, as it avoids information loss during communication. We agree that transmitting only foreground point clouds is an effective strategy for early collaboration methods to maintain accuracy while reducing bandwidth usage.
To validate this, based on the early collaboration method that directly aggregates all raw point clouds, we introduce our trained foreground point segmentation network into the fusion pipeline. By applying an appropriate threshold to filter foreground points, we reduce the communication volume of the early collaboration method to a similar magnitude as other methods, enabling a fair comparison.\\n\\n| **Method** | **AP\\\\@0.5** | **AP\\\\@0.7** | **Communication Volume** |\\n|:---------------------------------------------|:------------:|:------------:|:--------------------------:|\\n| V2X-ViT | 67.44 | 51.87 | 15.62 |\\n| Where2comm | 71.22 | 60.22 | 15.66 |\\n| Early collaboration w/ foreground filtering | 75.01 | 65.71 | 15.32 |\\n| CPPC | 76.89 | 69.39 | 15.46 |\\n\\n- On the DAIR-V2X-C test set, the early collaboration method with foreground filtering achieves 75.01 AP\\\\@0.5 and 65.71 AP\\\\@0.7, significantly outperforming Where2comm and V2X-ViT under a similar communication volume. Although the early collaboration with foreground filtering demonstrates a certain level of competitiveness, our approach still exhibits significantly superior accuracy, achieving performance improvements of 1.88 in AP\\\\@0.5 and 3.68 in AP\\\\@0.7. This improvement is attributed to the introduction of the point cluster feature and the more reasonable keypoint selection strategy employed in SD-FPS. In contrast, late collaboration methods use bounding boxes as message units, which primarily contain only foreground information.\"}", "{\"comment\": \"Thank you for your valuable comments and kind words to our work. Below we address specific questions.\\n\\n**Q1: Thresholds may be hard to choose in complex or unseen scenarios.**\\n- This is a great question. We ablate the thresholds involved in our CPPC in Tables 5 and 6. The results show minimal performance fluctuations over a wide range, demonstrating that our method is robust to threshold variations.
Furthermore, after determining these thresholds through ablation experiments on the validation set of the V2XSet dataset, we directly trained and tested on the OPV2V dataset under the same threshold settings, achieving state-of-the-art performance. This validates the generalization ability of our method.\\n\\n**Q2: Comparison with previous point-cloud-based methods.**\\n- **CPPC vs. Cooper.** Cooper is a foundational early collaboration method. Through cooperative sensing, Cooper can merge and align the shared data that is collected from nearby vehicles, which may provide data scopes coming from different positions and angles. Cooper relies on three types of human-designed ROI-based rules to select point clouds to meet actual bandwidth constraints, which may not generalize to complex real scenarios like DAIR-V2X-C. By contrast, our CPPC directly transmits sparse point clusters to the ego agent, which achieves the state-of-the-art performance-bandwidth balance.\\n- **CPPC vs. F-Cooper.** Both CPPC and F-Cooper are intermediate collaboration methods. Constrained by its voxel-based representation and RPN network, F-Cooper must expand the feature map size during the information aggregation stage to cover the perception range of different agents. This leads to unnecessary computations when collaborative agents are far apart. In contrast, the computational complexity of point cluster aggregation in CPPC depends solely on the number of potential objects, effectively avoiding this issue. We evaluate F-Cooper on the V2XSet and DAIR-V2X-C datasets, achieving an AP\\\\@0.7 of 87.06 on the V2XSet test set and 60.31 on the DAIR-V2X-C test set. In comparison, CPPC outperforms F-Cooper with absolute improvements of 2.49 and 9.08 in AP\\\\@0.7, respectively.\\n- **Novelty of implementation techniques.** The core contribution of our paper is the introduction of a novel point cluster-based communication paradigm for collaborative perception.
Building on the concept of point clusters, we develop a comprehensive system, CPPC, which achieves a state-of-the-art trade-off between performance and bandwidth, alongside robustness to various types of noise. To implement our approach, we enhance existing technologies (e.g., FPS) due to their wide applicability within the point cloud processing community. Crucially, it is the design of the point cluster that enables the integration of these technologies into the domain of collaborative perception\\u2014an advancement that was not feasible with previous methods. We will explore novel implementation techniques which can further improve our CPPC in the future.\\n\\n**Q3: Ablation of the weights of semantic and spatial distances.**\\n- We evaluate different weights, $\\\\lambda_s$ and $\\\\lambda_d$, for semantic and density scores, respectively, with a sample ratio of 1/16 on the test set of DAIR-V2X-C. For simplicity, we set $\\\\lambda_d=1-\\\\lambda_s$. The best performance is achieved when $\\\\lambda_d=0.4, \\\\lambda_s=0.6$.\\n| **Weight** | 0.0 | 0.2 | 0.4 | 0.6 | 0.8 | 1.0 |\\n|------------|-------|-------|-------|-------|-------|-------|\\n| **AP\\\\@0.5** | 76.14 | 76.28 | 76.41 | 76.47 | 76.32 | 76.16 |\\n| **AP\\\\@0.7** | 69.01 | 69.07 | 69.22 | 69.03 | 68.84 | 68.62 |\\n\\n**Q4: Crowded environment performances.**\\n- This is a great question. We analyze the target count in DAIR-V2X-C, as crowded environments typically involve a large number of targets. The dataset includes 42 scenarios with 30 or more targets. Our CPPC achieves an AP\\\\@0.5 of 70.83 and an AP\\\\@0.7 of 63.45, both surpassing the overall performance of existing methods, demonstrating its robustness in crowded environments.\"}", "{\"summary\": \"This paper proposes a novel message exchange unit called point cluster for collaborative perception. Point clusters can efficiently and compactly represent an object's location, structure, and semantic information.
The proposed CPPC framework includes point cluster-based encoding, packing, exchange, and integration methods. CPPC improves both communication efficiency and perception accuracy while being robust to various real-world noises. Experimental results on various benchmarks demonstrate that CPPC significantly outperforms existing methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This paper systematically analyzes the advantages and disadvantages of existing collaborative perception methods (early, intermediate, late collaboration) and proposes a rational solution considering the inherent trade-off between communication efficiency and perception performance.\\nThe proposed point cluster is a compact message unit that selectively combines the strengths of existing methods and efficiently represents both structural and semantic information of objects. \\nCPPC introduces parameter-free approaches to handle pose error and time latency issues encountered in real-world scenarios, ensuring robustness to various noise levels. \\nThis paper clearly identifies the current challenges and limitations in the field of collaborative perception and demonstrates how the proposed method can rationally solve them.\\nThrough clear problem formulation, the collaborative perception problem is mathematically defined, and based on this, the motivation and principles of the proposed method are lucidly explained.\", \"weaknesses\": [\"Based on the proposed technique, the amount of data that can be transmitted is limited. Therefore, it is crucial to prioritize foreground objects and bring them in a sparse manner to distinguish the overall shape, rather than bringing only a few parts. Regarding this aspect, I wonder if there are any criteria or tendencies for determining which points should be selected and what rules should be followed. 
Additionally, I am curious about the extent of performance degradation when background information is included instead of solely focusing on the foreground.\", \"If we follow the logic mentioned above, I wonder if we can expect sufficient performance improvement in early or late collaboration by only bringing foreground information and adjusting it according to the available bandwidth.\", \"In Figure 4(a), there is a section where the performance of the proposed method rises sharply. I am curious about the reason behind this phenomenon and how the performance of the proposed method would differ from existing technologies if the communication volume is lower than this point.\", \"On Page 7, Line 373, it is mentioned that the proposed method shows improvements of 5.7%, 7.3%, and 12.8%, but it is unclear which methods are being compared.\", \"On Page 7, Line 377, the term \\\"late collaboration\\\" is used. I am curious about which specific technique this refers to.\"], \"questions\": \"If we assume some objects are moving at high speeds, I think the errors caused by delay would vary for each object. It seems this aspect might not be covered; does the proposed method handle this problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the reply from the authors and my concerns have been addressed. Hence, I have decided to maintain my rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the authors' response. All concerns about the methodology and weaknesses I raised have been addressed. However, the experiment section is still difficult to read.\\nWhen discussing performance improvements, the baseline methods for comparison are not directly specified. 
For clarity and quick comprehension, I encourage the authors to revise this section.\"}", "{\"comment\": [\"Thank you for your valuable comments and kind words about our work. Below we address specific questions.\", \"**Q1: Relationship with FSD.**\", \"**Single-agent perception vs. multi-agent collaborative perception.** The goal of single-agent perception is to enhance perceptual capabilities within the scope of the ego view. In contrast, multi-agent collaborative perception seeks to address the occlusion limitation inherent to single-agent systems by facilitating complementary information exchange between surrounding agents. A key challenge in multi-agent collaborative perception is achieving superior scene-level perception performance under constrained communication costs. The development of multi-agent approaches must address challenges such as packaging observation information in a bandwidth-efficient manner, effectively aggregating multi-source data, and ensuring robustness against issues like pose errors and time latency\\u2014challenges absent in single-agent approaches.\", \"**FSD vs. CPPC.** We agree that both our CPPC and FSD utilize sparse point cloud representations for 3D object detection instead of dense BEV representations. However, the contributions differ fundamentally:\", \"For single-agent perception, FSD innovatively proposes a fully sparse pipeline, addressing the issue of center feature missing and eliminating the need for time-consuming neighborhood queries in purely point-based methods. It achieves state-of-the-art performance in 3D object detection while being much faster than previous detectors, especially in long-range settings. 
Although FSD has greatly promoted the development of point cloud-based sparse detectors, it did not consider the communication-related issues faced by multi-agent collaborative perception.\", \"For multi-agent collaborative perception, CPPC introduces a novel communication paradigm based on our proposed message unit, i.e., point cluster, which effectively addresses key challenges in existing BEV-based intermediate collaboration approaches. These challenges include object feature degradation during message packing, inefficient message aggregation for long-range collaboration, and the communication of implicit structural representations. First, by prioritizing semantically and structurally rich points in point clusters through our proposed SD-FPS strategy, CPPC achieves the state-of-the-art performance-bandwidth balance. Second, the computational complexity of point cluster aggregation scales efficiently with the number of potential objects in the scene, rather than increasing quadratically with the perception range, making it highly suitable for long-range collaboration. Furthermore, extensive experiments demonstrate that CPPC maintains robustness to a wide range of pose errors and time latency without additional fine-tuning, thanks to the explicitly preserved coordinate information within the point clusters.\"]}", "{\"comment\": \"Thank you for your suggestion. We have updated the draft and uploaded it, with the changes highlighted in blue. The updates are as follows:\\n1. The second-highest performance accuracy is now highlighted in blue in Table 1. Additionally, we specify the comparison method in the text description (lines 372\\u2013376).\\n2. A detailed description of the late collaboration baseline has been added (lines 377\\u2013378).\"}" ] }
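To make the semantic/density score weighting ablated in Q3 above concrete, here is a minimal, self-contained sketch of combining per-point scores with lambda_d = 1 - lambda_s and keeping the highest-scoring points. This is our illustration only, not the authors' released SD-FPS implementation: the function names, the assumed score normalization, and the plain top-k selection (standing in for the actual farthest-point-sampling variant) are all our simplifications.

```python
def combined_scores(semantic, density, lambda_s=0.6):
    # Weighted combination of per-point semantic and density scores,
    # mirroring the Q3 ablation above where lambda_d = 1 - lambda_s and
    # the best setting reported was lambda_s = 0.6, lambda_d = 0.4.
    # Function name and normalization are assumptions, not the paper's code.
    lambda_d = 1.0 - lambda_s
    return [lambda_s * s + lambda_d * d for s, d in zip(semantic, density)]

def top_k_points(semantic, density, k, lambda_s=0.6):
    # Keep the k highest-scoring point indices: a simplified stand-in for
    # the sampling step that prioritizes semantically/structurally rich points.
    scores = combined_scores(semantic, density, lambda_s)
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return sorted(order[:k])

# Toy example: three points with (semantic, density) scores.
sem = [0.9, 0.1, 0.5]
den = [0.2, 0.8, 0.5]
kept = top_k_points(sem, den, k=2)  # indices of the two highest-scoring points
```

Under this weighting, a point with a strong semantic score but weak density score can still outrank a balanced point, which is the behavior the lambda_s = 0.6 setting favors.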
54KcduuYeG
AutoScale: Automatic Prediction of Compute-optimal Data Compositions for Training LLMs
[ "Feiyang Kang", "Yifan Sun", "Bingbing Wen", "Si Chen", "Dawn Song", "Rafid Mahmood", "Ruoxi Jia" ]
Domain reweighting is an emerging research area aimed at adjusting the relative weights of different data sources to improve the effectiveness and efficiency of language model pre-training. This paper demonstrates that the optimal composition of training data from different domains is scale-dependent, challenging the existing practice of determining optimal mixtures through small-scale experiments and directly applying them at larger scales. We derive an analytical model for the dependence of optimal weights on data scale and introduce *AutoScale*, a novel, practical approach for optimizing data compositions at potentially large training data scales. *AutoScale* first uses a principled optimization framework to find optimal compositions at smaller, feasible scales, then predicts optimal compositions at larger scales using our derived model. Our evaluation on GPT-2 Large and BERT pre-training demonstrates *AutoScale*'s effectiveness in improving training convergence and downstream performance. Particularly, for GPT-2 Large on RedPajama, *AutoScale* decreases validation perplexity 28% faster than baselines, with up to 38% speed-up over unweighted training, achieving the best performance across downstream tasks. This work provides insights into the varying benefits of data sources across training scales for language models, contributing to the burgeoning research on scale-dependent data curation. Code is open-sourced
[ "Data Curation", "Data Composition", "Scaling Laws", "Data-centric AI", "Large Language Models (LLM)" ]
Reject
https://openreview.net/pdf?id=54KcduuYeG
https://openreview.net/forum?id=54KcduuYeG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y3lraQKukE", "toziS28mTb", "t3kI3lM8SY", "cKUeVE5gDr", "VS0v8HiHT8", "VLZY6zcNvT", "TakEwsxTgj", "Nxrhbp9ZOq", "LR8pXWbdMZ", "IOnV6omabz", "I3MDifjjQV", "HvUCGTP8rb", "HFl5OuxA9x", "GjGVFc0JOR", "CuwdmUtTyf", "Bz1JMeu9c9", "Aif1CMpCrT", "6NQOdZR9tf", "2OfpLJrQzk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733011598739, 1732076041305, 1732506497910, 1730757262739, 1734659066844, 1733190779794, 1737523464692, 1732840492685, 1732938463754, 1732076153108, 1733113311218, 1732332888362, 1732076081419, 1730505021458, 1730793713621, 1731107202067, 1732332973087, 1732332997234, 1733190730967 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_Ad96" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_JgHF" ], [ "ICLR.cc/2025/Conference/Submission1690/Area_Chair_C33m" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_Qo6P" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_Ad96" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_mxrW" ], [ "ICLR.cc/2025/Conference/Submission1690/Reviewer_Qo6P" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ "ICLR.cc/2025/Conference/Submission1690/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1690/Authors" ] ], "structured_content_str": [ "{\"title\": \"Extended rebuttal period ends in 2 days\\u2013we anticipate your feedback! \\ud83d\\ude42\", \"comment\": \"Dear Reviewer JgHF,\\n\\nWith the extended rebuttal/author discussion period ending in 2 days, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it.\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"Response by Authors (1/3)\", \"comment\": \"We thank the reviewer for the review and appreciate the feedback. Here we provide explanations for the listed weaknesses and responses to the questions.\\n\\n---\\n\\n## 1. Experiment setup\\n\\n### a. Learning rate schedule\\n\\nThis work follows the standard setup for training GPT-2-style LMs with a linear learning rate schedule and a 10% warmup ratio. The learning rate schedule and warmup steps are relative to the total steps in each training run, automatically scaled for each experiment. We documented the hyperparameters for pre-training GPT-2 Large models in Table 2.\\n\\n### b. Number of epochs for different domains\\n\\nWe agree with the argument that sampling different domains with different numbers of epochs could lead to misaligned results and confusing comparisons. 
**Thus, in this work, we strictly controlled this factor to ensure that in every training run, regardless of its mixture, we train on data from each domain with the same number of epochs.** The source dataset used in this work, RedPajama, is large enough that we are always able to obtain the target amount of non-repeating samples for every data mixture.\\n\\n### c. Comparative evaluation\\n\\nTo the best of our knowledge, **there are no existing solutions for determining the ground truth optimal domain weights, except a computationally intractable exhaustive grid search, training the full model on all combinations of domain weights.** *[DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, 2023]* is one of the earliest works on optimizing training data mixtures for pre-training LLMs and has been widely referred to as the main baseline in subsequent works. Thus, we formulate the data optimization problem in Section 3.1 as a bilevel optimization problem where no direct solution is computationally tractable. Consequently, we developed the Direct Data Optimization (DDO) algorithm in Section 3.2, which leverages scaling law analysis to provide a global approximation to the original optimization problem, **achieving high-accuracy optimization results with tractable computation complexity. This already pushed forward the frontier of solving the original data optimization problem.** Still, we do not settle for a solution that requires training the model with a full data budget multiple times. This led to the development of the novel AutoScale tool, which automatically predicts optimal training data compositions at larger scales based on compositions optimized at smaller scales.\"}", "{\"comment\": \"Could the authors paste their responses to each of the questions? 
It seems you have answered Reviewer Qo6P's comments in this comment, but not all of mine.\"}", "{\"summary\": \"This work presents a method to estimate optimal data domain weights at large training-data scales by extrapolating via exponential functions fit to smaller-scale training runs. The proposed method is evaluated on GPT-2 Large + RedPajama and BERT pretraining, and compared to extant method baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The general problem the authors attempt to address is important, and the assessment that present methods are limited and that performance headroom is available is well-framed. The code is open-sourced. The evaluations presented are limited, but positive.\", \"weaknesses\": \"In general, the writing is difficult to parse. It is frequently frustratingly vague, including in the Abstract, where the actual method is alluded to but not elucidated. In the actual methods section, important questions about the method are unanswered, leaving the method underspecified (is the learning rate schedule (linear? presumably linear decay?) the same for the tuning runs as for the final run? Is the decay timing adjusted to the compute budget? The value of the final validation loss hinges critically on this - yet it goes unmentioned). There is no addressing of the profound difficulties this method (and others like it) can be expected to have around epochs for individual datasets. A more thorough analysis would identify and investigate this issue with experiments demonstrating specific datasets being sampled for > 1 epoch, and the subsequent breakdown of the \\\"scaling law\\\" prediction. Evaluation is purely comparative to other methods, and does not assess to what extent the predicted 'optimal' values might differ from more expensively traditionally-derived 'optimal' values. 
No discussion of the relative cost of the method (with its 'linearly scaling' cost in the number of datasets) is mentioned, though it is clear it would become prohibitively expensive for dataset mixtures with more than a handful of individual datasets. The method proposed is prohibitively expensive at large model sizes, and seems unlikely to scale to larger compute budgets even at small dataset sizes due to the issue of datasets passing through multiple epochs, which is unaddressed in this work. This limitation goes unmentioned.\", \"questions\": \"Fig 1: [a] suggests that you've tuned 6 models between 30M and 1.2 B tokens, yet [c] shows only three models being used to fit the predictor model. Why is that? Where are the other data points? And are *all* of the linear fits R2=0.998? Is that the average R2? Also, [d] shows the predictions of the model extrapolated past 1.2 B to 307 B? Why are you not showing the training data points (between 30M and 1.2 B) as well? And isn't the largest model you look at trained to 10B? Why show this extrapolation to so far beyond where you explore? This seems misleading. The x-axis should say (log scale) as well. In [b] the color used for the 1.2B model is the color used for the 0.6B model in [a]. And there is a typo in the title ('scale - depedent' -> \\\"dependent\\\"). In [e] the 38% improvement looks to be overstated due to the noise of those evaluation curves, you could just as easily pick out the peak in Autoscale curve at step 86k and the point in the Uniform curve at step 100k to get a smaller improvement result with the same underlying data.\", \"table_1\": \"boolq has the Autoscale value bolded as 'best' but the Data Mixing Laws value is greater. Also, consider placing your method on the bottom row separated by a thin line.\", \"fig2\": \"What is being depicted here? Is this showing power laws being fit to 3 empirical datapoints? Is the first column of points supposed to be at 0? 
It looks like the points are at [0.2, 1, 3] on the x-axis?\", \"nits\": \"\", \"throughout\": \"\\\"AutoScale\\\" is consistently the wrong font size. Please fix. Similarly, in section 5.2 the font size of the methods needs to be fixed. And in line 418 'from' is included in the method font instead of the default font.\", \"181\": \"work contribute -> contributes\", \"379\": \"N^(1)* is missing the N in summation\", \"465\": \"much lowered -> much lower\\n155 'a consistent shift can be observed', please be more specific, what is shifting, how is it consistent?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper attempts to tackle the challenging problem of optimizing data mixing ratios for language model training across different scales. The authors propose a computationally efficient approach by first analyzing domains independently to derive power law scaling parameters, then solving a convex optimization problem to determine optimal mixing ratios. They further introduce AutoScale, an iterative method for predicting mixing ratios at larger scales by interpolating between known solutions, addressing the practical challenge that optimal data mixing strategies often evolve with model scale.\\n\\nThe paper's key strengths lie in its decomposition of an intractable optimization problem into manageable components and its principled mathematical framework. The empirical validation across both autoregressive (GPT-2) and bidirectional (BERT) architectures demonstrates practical utility. The work effectively transforms what has historically been a largely empirical process into a more systematic approach, potentially reducing the computational overhead in developing large language models.\\n\\nThis paper exhibits several notable limitations that could be improved. 
First, as mentioned by most reviewers, the experimental scope is constrained by using only 3B tokens (in Table 1, the most important table to show the improvements), which is insufficient for a paper studying language model pre-training and should be extended to larger token counts. In practice, we usually use Training-Compute Optimal data allocation for a model pre-training, and it should be nearly 10B tokens for a 774M GPT model. Second, the discussion with previous work is inadequate, particularly in its omission of the Data Mixing Laws (DML) work, which pioneered the application of scaling laws to data mixing - this relationship should be explicitly addressed to properly contextualize the current contributions. Additionally, the methodology used to establish scaling laws raises concerns, as traditional scaling law studies typically analyze thousands of data points to demonstrate consistent patterns, while this study's limited data points (at most 1.2B token budget) make it difficult to confidently characterize the observed relationships as scaling laws. Finally, I suggest reconsidering the term \\\"optimal data composition\\\" used throughout the paper, as it may be overclaiming the findings. True optimality would require exhaustive testing of all possible data compositions, which is nearly impossible. A more precise term, such as \\\"effective data composition\\\" or \\\"improved data composition\\\", would better reflect the actual extent of the optimization conducted. Additionally, the absence of comparisons against DoReMi on the original Pile dataset also leaves some questions about the experiments.\\n\\nOverall, reviewers have mixed feelings about this paper. I have carefully read all the reviews and the author rebuttal, and have read the paper again. 
While the mathematical framework and formulation are commendable, the experimental validation requires substantial strengthening.\", \"additional_comments_on_reviewer_discussion\": \"While reviewing this paper, it's worth noting that one reviewer (JgHF) did not participate in the rebuttal discussion, and consequently, their opinion carried less weight in the final decision-making process. The remaining reviewers engaged with the authors' rebuttal and contributed to a constructive discussion. After careful examination of both the paper and the arguments raised by reviewer Ad96, the area chair acknowledges the paper's limitations in its experimental validation and makes the final decision.\"}", "{\"title\": \"(Last Call) Extended rebuttal period ending **Today**\\u2013we anticipate your feedback! \\ud83d\\ude42\", \"comment\": \"Dear Reviewer JgHF,\\n\\nThe authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it. **With the extended rebuttal/author discussion period ending *Today*, we sincerely look forward to your final feedback.**\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the rebuttal. I agree that this paper's idea is interesting and could serve as valuable inspiration for further research in this area. While the experiments are relatively small in scale, the scope of the work is already comparable to some previous studies. 
Based on this, I would like to increase my score for the contribution of this work.\\n\\nHowever, I still have some concerns:\\n\\n1. Regarding the task performance for DDO results in Figure 3(b): For CoLA, the original evaluation metric is Matthews Correlation Coefficient (MCC), but many research papers report results as accuracy. For stsb, even when using Pearson Correlation Coefficient (PCC), the baseline appears extremely low to me. Could you clarify the reason for this significant difference between uniform and DDO weights for the stsb task, given that the differences for other GLUE tasks are not as pronounced?\\n\\n2. Another concern I have is with Takeaway 3: \\u201cAUTOSCALE-predicted weights consistently outperform any baseline with a\\n28% to 38% margin and demonstrate advantageous performance on downstream tasks.\\u201d This claim seems overstated and potentially confusing. It gives the impression that there is a 28%\\u201338% performance improvement, whereas it actually refers to the speedup for decreasing validation perplexity. I think the phrasing in the abstract is more accurate and better reflects the findings.\"}", "{\"title\": \"2nd-round responses by authors\", \"comment\": \"The authors sincerely appreciate the reviewer's consideration and value the reviewer's feedback! We strive to consistently improve the manuscript and serve its goal to contribute to the research community.\\n\\n---\\n### Concern 1(a): accuracy vs. MCC\\n\\n**Re:** We fully understand the reviewer's question. In the current literature, **there appears to be no consensus on the primary evaluation metrics for GLUE tasks.** The original paper proposing the GLUE benchmark, *[GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding]*, lists Matthews corr. as the evaluation metric for \\\"cola\\\", Pearson/Spearman corr. for \\\"stsb\\\", acc./F1 for \\\"mrpc\\\" and \\\"qqp\\\", and accuracy for other tasks. 
**This is how we chose evaluation metrics in this work.**\\n\\nNonetheless, **we did notice the considerably wide range of choices for evaluation metrics of the GLUE benchmark among published works**, including seminal papers. For example, in Google's original paper proposing the BERT model, *[BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding]*, the evaluation on the GLUE benchmark excludes the \\\"wnli\\\" task and reports F1 scores for \\\"qqp\\\" and \\\"mrpc\\\", Spearman correlations for \\\"stsb\\\", and accuracy scores for the other tasks. *[Data Selection for Language Models via Importance Resampling]* reports accuracies for all tasks. **We are neutral about which metric to use and agree that unifying the experiment setup will be beneficial**, which would cause less confusion and allow better comparison between different works.\\n\\n---\\n\\n### Concern 1(b): low baseline performance for \\\"stsb\\\"\\n\\n**Re:** While developing this work, we noticed the performance on task \\\"stsb\\\" demonstrates the phenomenon often referred to as **\\\"emergent ability\\\", which describes the scenarios where the task performance of an LLM improves abruptly in an unpredictable manner while increasing the model size/compute budget** (ref: *[Emergent Abilities of Large Language Models]*). In this work, we observed the performance on task \\\"stsb\\\" increases sharply when the training budget reaches certain thresholds. In the experiment, **for the same training data budget**, DDO-optimized domain weights led to higher training efficiency and the trained model observed a sharp improvement in \\\"stsb\\\" performance. On the contrary, **the model trained with uniform domain weights did not reach the threshold for the performance hike**, making the performance discrepancy appear wide.\\n\\nIt was not the intention of this work to design experiments in this specific manner. 
Rather, as we stressed in the review responses, we note that task performance is generally less predictable and **better used as a qualitative measure.** Task performance does not scale smoothly with compute budget/model size and it is currently an open question whether/how task performance for LLMs can be predicted. Perplexity/loss, on the contrary, scales stably and serves as a better indicator to track the model's training progress/capability. **We recommend considering validation perplexity as the primary metric for efficiency research on LLM pre-training, which allows more precise quantitative comparisons.**\\n\\n---\\n\\n### Concern 2: relative improvements in Takeaway 3\\n\\n**Re:** We appreciate the feedback. **We will revise the manuscript to clearly state that the improvements are in the training efficiency where AutoScale-predicted weights train the model 28%\\u201338% *faster* than baselines.**\"}", "{\"title\": \"Response by Authors (3/3)\", \"comment\": \"## Questions: Linear fitting in Figure 1[a]\\n\\nThe reported $R^2=0.998$ is the average from all curves. In this work, we fit the proposed AutoScale predictor with optimal domain weights obtained at 0.3B~1.2B data budgets (where only 2 out of 3 datapoints are needed for fitting AutoScale). Data budgets smaller than 0.1B tokens are out of the normal range and only provided for qualitative analysis. Due to data sampling and stochasticity from the ML training pipeline, the possible noise in small-scale results may outweigh the benefits of adding these results to quantitative analysis.\\n\\n---\\n\\n## Question: Extrapolation on larger data scales\\n\\nAutoScale is designed to predict optimal domain weights on larger data scales where the direct solution to the data optimization problem cannot be obtained. AutoScale is intended to provide forecasts on optimal data mixtures before the model is trained. 
Though the intervals on the X-axis are not uniform, the values on the X-axis are the actual data budgets (in billion tokens).\\n\\n---\\n\\n## Question: Relative improvements on non-monotonic curves\\n\\nWe agree that the relative improvements on non-monotonic performance curves carry some ambiguity. In this work, we followed the same practice as in representative works on related problems such as [DoReMi] and [Rephrasing the Web: A Recipe for Compute & Data-Efficient Language Modeling, 2024]. It appears to be the common practice to define relative performance improvements in this manner. Following the same procedure could allow easier comparison with other results.\\n\\n---\\n\\n## Question: X-axis in Figure 2\\n\\nFigure 2 depicts the results of fitting loss curves with power-law functions for 774M Decoder-only LMs (GPT-2 Large), directly approximating how loss changes with each domain's data quantity. **X-axis depicts the quantity of domain data relative to the original amount before perturbation, where 1.0=100% (original data amount), 0.33=33% (1/3x the original amount), and 3.0=300% (3x the original amount).** We have updated the caption in the manuscript.\\n\\n---\\n\\n## Typos, formatting, visualization issues\\n\\nThanks for pointing out these issues in the manuscript. We have revised and fixed all the listed issues together with some others. We are appreciative of the time and effort. We believe this has improved the clarity and presentation of the manuscript.\"}", "{\"title\": \"Extended rebuttal period ends on Monday (Dec. 2)\\u2013we anticipate your feedback! \\ud83d\\ude42\", \"comment\": \"Dear Reviewer JgHF,\\n\\nWith the extended rebuttal/author discussion period ending on Monday (Dec. 2), we sincerely look forward to your feedback. 
The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it.\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"Rebuttal period ends soon\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer Qo6P,\\n\\nAs the rebuttal/author discussion period ends in a few days, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it.\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"Response by Authors (2/3)\", \"comment\": \"### D. Computation overhead\\n\\nDDO requires training $2M+1$ models where $M$ is the number of data domains. This is conducted on a data scale **orders of magnitude smaller** than the target compute budget. AutoScale requires obtaining optimized domain weights with DDO on two different data scales. This allows for predicting the domain weights for training the target model with a much larger data budget. 
**In comparison, the full data budget for 1B-parameter models is 2~3T tokens** (ref: 1.5T tokens in *[OpenELM: An Efficient Language Model Family with Open Training and Inference Framework]*, 2T tokens in *[TinyLlama: An Open-Source Small Language Model]*, 3T tokens in *[olmo: Accelerating the science of language models]*) **where a 0.5% speed-up would justify the cost.** **We are adding this discussion to the manuscript and the limitations.**\\n\\n**Besides, the computation efficiency for AutoScale is advantageous relative to existing works.** In the default setup, in each iteration, DoReMi requires training two models each with 200k steps with token_length 1024 and batch_size 64*8, **which equals >100B tokens and ~10x that of AutoScale.** This process needs to iterate multiple times, resulting in a multiple of this compute. This has not included the computation for conducting the grid search to determine which proxy model to use. *[Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling]* explores improving LLM pre-training efficiency via training on synthetic texts generated by pre-trained LLMs. **The computation overhead for generating synthetic texts with 7B-parameter models is of the same magnitude as training the target 1B-parameter model**.\\n\\nFurther, as listed in the discussion with Reviewer mxrW, even though the cost of training full-size models on a small proportion of their target data budget should be within a low percentage of the total training cost, **there's room for further improvements.** The next step could be **applying the scaling law solution developed for different data budgets to different model sizes and directly predicting optimal data composition across model scales.** The original paper on scaling laws reports that model performance is empirically predictable with power law functions w.r.t. either the training data size OR the model size\\u2013these two were treated as equal, independent factors. 
Theoretically, the methodology we developed in this work could be applied to scaling with the model size as well. This extension could be an independent work parallel to this manuscript. Should the results turn out to be positive, then the next step will be combining these two methods into a unified framework to predict optimal data mixtures for different compute budgets and model sizes at the same time. **We are listing this in future directions and calling for contributions from the community.**\"}", "{\"summary\": \"This work proposes a method called \\u201cAutoScale\\u201d that helps predict the optimal composition of pre-training data for LLMs. It challenges the conventional notion of determining this via small scale experiments and simply applying them to a large scale where two axes change (data scale, parameter count). The experiments show a very promising line of research and it was a pleasure to read.\\n\\nI couldn\\u2019t check the math as well as I would have liked to.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Very strong work in terms of the hypothesis and experimental setup albeit at smaller scales. The promise of finding optimal weights for training large networks without having to guesstimate it is a very attractive proposition.\", \"The plots are really well done. They drive the main idea of the paper very well(especially Fig 1 (a, e) )\"], \"weaknesses\": [\"I would like to list the following weakness fully ensuring the authors that I am not unreasonable and am completely open to increasing my score if these are addressed/answered satisfactorily.\", \"The work proposes using a different approach to finding optimal data weights for a given pre-training compute budget. This is well explained via results but does in fact require training the original size model. 
Given that we obtain suboptimal performance via the conventional way (smaller model, fewer data), an analysis showing how much performance could be gained by spending the compute and training these (equal parameter) networks would be useful.\", \"For Takeaway 1, Fig 1(b) only has 2 data points. Additional points would help make the case stronger. It\\u2019s a tough sell to make such a bold statement with two data points. But I\\u2019m hoping I am wrong :)\", \"Maybe I missed this, but the repeated claims that Wikipedia should be sampled less at a higher scale are a result of the OLS fit. But no experiment actually confirmed this fact in the paper, right? Since the max scale was 1.2B? Please correct me if I\\u2019m wrong.\", \"General Comments/Typos:\", \"[Section 2] : \\u201cthis work contribute\\u201d -> \\u201cthis work contributes\\u201d\", \"[Section 3.1] : wi = Ni/N => wi = Si/N ?\", \"[Algorithm 1] : Train the model on data S = ({S1 . . . Sm} \\\\ Si) => S = ({S1 . . . Sm} \\\\ Sj) ?\", \"Some of the font sizes are very distracting to read.\"], \"questions\": [\"Even at a smaller scale, I see opportunities of clear promise where we could have had more points between 0.3B and 1.2B and shown some trend. Any specific reason this was not done / increased to more than 1.2B? With scale, a lot of problems disappear that are apparent at lower scales.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of predicting the optimal data mix for a given compute budget (i.e., fixed total token count and model size). 
A key challenge here is that the optimal domain weighting may change at different scales, hence it is inaccurate to use smaller models to predict large model performance, while solving the optimization problem at the large model scale directly is computationally infeasible (requires multiple retrainings).\\n\\nThe paper proposes a method that works on one domain at a time by holding the rest of the data constant (hence the loss is constant for the other domains too), and estimates a scaling law per domain. The power law parameters $\\\\gamma_i$ and $l_i$ can be easily estimated, which approximate a regular data scaling function where $l_i$ is the irreducible loss of that domain. \\n\\nAfter the power law of each domain is found, the final objective is to mix the data so that the loss is minimized while the sum of the tokens meets the budget, which becomes a convex problem that can be solved efficiently. This gives the DDO method. The different $\\\\gamma_i$ explain why there is a different mix at different stages.\\n\\nA method \\\"AutoScale\\\" is further proposed to obtain the data weights for a larger token budget, by iteratively mixing two data weights at different scales to create the weights of the next one. \\n\\nThe proposed approach is tested on models like GPT-2 (autoregressive) and BERT (bidirectional), showing improved convergence rates and downstream task performance. Empirically, the results show AutoScale\\u2019s ability to shift data weights, favoring diverse data sources like CommonCrawl at larger scales while reducing reliance on traditionally high-quality, standard-format data such as Wikipedia. These findings match the empirical findings of the data weights used for prior successful models such as Llama.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This paper analyzes an important problem, data weighting of LLM training, which can improve the training efficiency with reasonable cost. 
It also presents an actionable algorithm for LLM training.\", \"The proposed method assumes a power law formulation which makes the data weighting problem practically solvable. It is important to point out that data weights are scale-dependent.\", \"The empirical results and findings on the corpus weighting align with the common belief of the community that further up-weighting high-quality sources is less effective, and books and web documents continue to be important at larger scales. This shows that the proposed method has strong explanatory ability.\", \"The experiment is quite thorough, considering the cost for training models is quite high even at small scales.\"], \"weaknesses\": [\"I wonder if the carbon footprint of the experiments here should also be reported\", \"The presentation is good but can still be improved. The core method part can be improved with additional intuitive explanations and better use of notation. Further, I have noted down some minor typo/notation errors:\"], \"typo_or_notation\": [\"L258: $N_i^+2$ should probably be $N_i^+$\", \"L526: Ecoder -> Encoder\", \"L200: probably should better use $N_i$ instead of $|S_i|$\", \"L281: $w_i'*N'$ appears twice.\", \"Figure 2 caption: meaning of (original = 1) is a bit unclear\", \"L380: $N$ is missing in the first equation.\"], \"questions\": [\"I am a bit unclear about your definition of \\\"equivalent data size\\\" at L243, what's the equivalence about (i.e., which size and which size)? Note that I understand the meaning of $N_I^0$, just wondering about the terminology here.\", \"Maybe I missed something, but how does one control the budget for the next $N^{(3)}$? It seems the amount of tokens is defined by the initial weights of $N^{(1)}$ and $N^{(2)}$. 
Or in other words, say I need to find an optimal weight for a total token budget of 300B, how should I start with $N^{(1)}$ and $N^{(2)}$?\", \"Adding to the prior question, if the optimal ratio of each domain follows an exponential function, after taking a few data points using AutoScale, can we simply fit the exponential function instead of using the AutoScale iterative method? You seem to be using that in Figure 1 (d). If yes, this simply answers my question above.\", \"While the problem of different data scales is resolved with a scaling law solution, can we also use a similar approach on model scale? Even though the cost of using a small amount of data for a larger model should be within a low percentage of the total training cost, setting up the experiment for the larger scale is non-trivial. It'd be nice to have a function that can predict the loss across model scales.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors address an interesting topic in this paper: a method for automatically optimizing the mixture proportions of pretraining data domains when training language models.\\nThey begin by formulating the optimal mixing problem as a bi-level optimization and then propose the Direct Data Optimization (DDO) algorithm to formalize the relationship between optimal data compositions and training data scales. Using DDO, they conduct empirical studies to optimize domain weights at various training data scales, demonstrating that the optimal data composition varies with the scale of the training data. Finally, they introduce AutoScale, which automatically predicts optimal training data compositions at larger scales based on compositions optimized at smaller scales. 
\\nAdditionally, their evaluation of AutoScale on both decoder-only and encoder-only models demonstrates its ability to achieve computational savings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. AutoScale presents an interesting idea that distinguishes it from previous work, demonstrating that the optimal weights are only effective at the scale they were optimized for and become suboptimal when applied to other scales. It offers a practical method for automatically and efficiently determining domain weights when training large language models.\\n2. The experiments are conducted on both encoder-only and decoder-only models and show good results on the decoder-only model. \\n3. The work is supported by both empirical experiments and mathematical formulations. Additionally, the diagram in the paper is well-designed and effectively conveys the underlying concepts.\", \"weaknesses\": [\"1. The experimental setup is not entirely convincing:\", \"The models used (a 774M decoder-only model and a 110M encoder-only model) are relatively small compared to today\\u2019s large language models, making it difficult to gauge performance at a larger scale.\", \"The data size is limited to 3B, 5B, and 10B tokens, with results in Table 1 only reflecting the 3B set.\", \"Figure 3(b) lacks explanation, and the cola baseline and DDO performance seem unusually low, falling below random guessing (0.5). Also, the stsb baseline seems low too.\", \"2. The evaluation of downstream tasks could be expanded. It would be helpful to see the models' performance on more complex tasks, such as mathematical problem-solving.\"], \"questions\": \"1. If I understand correctly, for the downstream tasks, the evaluation metric used is perplexity. Why is perplexity chosen as the metric instead of one that is specific to the dataset or task itself?\\n2. 
Is there any potential explanation for why AutoScale doesn't perform as well on encoder-only models compared to decoder-only models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal period ends soon\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer JgHF,\\n\\nAs the rebuttal/author discussion period ends in a few days, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it.\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"Rebuttal period ends in a few days\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer Ad96,\\n\\nAs the rebuttal/author discussion period ends in a few days, we sincerely look forward to your feedback. The authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it.\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_additional_results_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. 
We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}", "{\"title\": \"(Last Call) Extended rebuttal period ending *Today*\\u2013we anticipate your feedback! \\ud83d\\ude42\", \"comment\": \"Dear Reviewer Qo6P,\\n\\nThe authors are deeply appreciative of your valuable time and efforts spent reviewing this paper and helping us improve it. **With the extended rebuttal/author discussion period ending *Today*, we sincerely look forward to your final feedback.**\", \"it_would_be_very_much_appreciated_if_you_could_once_again_help_review_our_responses_and_let_us_know_if_these_address_or_partially_address_your_concerns_and_if_our_explanations_are_heading_in_the_right_direction\": \")\\n\\nPlease also let us know if there are any further questions or comments about this paper. We strive to consistently improve the paper and it would be our pleasure to have your precious feedback!\\n\\nKind Regards,\\\\\\nAuthors of Submission1690\"}" ] }
53xxT3LwJB
NN-ResDMD: Learning Koopman Representations for Complex Dynamics with Spectral Residuals
[ "Yuanchao Xu", "Kaidi Shao", "Nikos K. Logothetis", "Zhongwei Shen" ]
Analyzing long-term behaviors in high-dimensional nonlinear dynamical systems remains a significant challenge. The Koopman operator framework has emerged as a powerful tool to address this issue by providing a globally linear perspective on nonlinear dynamics. However, existing methods for approximating the Koopman operator and its spectral components, particularly in large-scale systems, often lack robust theoretical guarantees. Residual Dynamic Mode Decomposition (ResDMD) introduces a spectral residual measure to assess the convergence of the estimated Koopman spectrum, which helps filter out spurious spectral components. Nevertheless, it depends on pre-computed spectra, thereby inheriting their inaccuracies. To overcome its limitations, we introduce the Neural Network-ResDMD (NN-ResDMD), a method that directly estimates Koopman spectral components by minimizing the spectral residual. By leveraging neural networks, NN-ResDMD automatically identifies the optimal basis functions of the Koopman invariant subspace, eliminating the need for manual selection and improving the reliability of the analysis. Experiments on physical and biological systems demonstrate that NN-ResDMD significantly improves both accuracy and scalability, making it an effective tool for analyzing complex dynamical systems.
[ "Koopman operator", "data driven dynamical system", "dynamic mode decomposition" ]
Reject
https://openreview.net/pdf?id=53xxT3LwJB
https://openreview.net/forum?id=53xxT3LwJB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmU1JVBCZN", "xSesikiud0", "u5BfegzTuT", "gOdhp66SOA", "fzUco0w98j", "cb1sCYB3tm", "ahGVe8ngxz", "XHN54FEfRz", "X6SqsHLFAK", "UT8vqQVcv2", "TPiaiqZd3G", "PZPH0aRQBs", "JVPhqhSzvy", "JJokj3rsf6", "BKGRaw0XOy", "AVRAJoHkhO", "99jkHQ5wRe", "3KUnzQeoaw", "34bxu0bszV", "0sSSTFPsAg", "0FJpYs5UN0" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1729402592656, 1732602718978, 1732097029560, 1732708076152, 1732529209197, 1732218655270, 1730449296923, 1732220140548, 1729459785023, 1732252378334, 1732446956547, 1732585742049, 1732220162376, 1732098807384, 1732095865128, 1732613542181, 1737523831059, 1732691382298, 1732379095745, 1730534396367, 1733890123420 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_iMTa" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_GfSg" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_iMTa" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_BKBw" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_GfSg" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_BKBw" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_BKBw" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Authors" ], [ "ICLR.cc/2025/Conference/Submission7305/Reviewer_HKiW" ], [ "ICLR.cc/2025/Conference/Submission7305/Area_Chair_jcyr" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose Neural Network-ResDMD (NN-ResDMD), where the dictionary functions are automatically selected by neural networks. The network is trained by minimizing the loss function that is related to the residual of the eigenvalue problem of the Koopman operator. Numerical results are also illustrated to confirm the behavior of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"ResDMD is a powerful tool to observe the spectra of Koopman operators. The authors combine neural networks and ResDMD to make ResDMD a more flexible method. The topic is interesting and relevant to the community. The paper is well-organized and easy to follow.\", \"weaknesses\": \"The authors insist that their proposed method automatically selects basis functions, eliminating the need for manual intervention. I understand that in practice, applying neural networks to find proper basis functions is more flexible. However, theoretically, the advantages of applying neural networks to estimating Koopman operators and their eigenvalues are not clear to me. They assume there is a basis function $\\\\psi_i$ and construct a neural network that can sufficiently approximate $\\\\psi_i$. Could you clarify what $\\\\psi_i$ is? And, what impact on estimating Koopman operators and their eigenvalues is induced by the error of approximating $\\\\psi_i$ with the neural network?\", \"questions\": \"In Fig. 4, the spectrum of the estimated Koopman operator by NN-ResDMD is distributed on the unit circle. 
On the other hand, the spectra of the estimated Koopman operators by other methods are also distributed inside the unit circle. Is the dynamical system measure-preserving, or did you only focus on the spectra on the unit circle?\", \"minor_comment\": \"In line 208, do you mean $(G+\\sigma I)^{-1}$ instead of $\\dagger$ since $G$ is positive semi-definite?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I would like to thank the authors for taking into account the comments and suggestions I made. The clarity and quality of the paper have improved in my opinion, especially the numerical experiments section. This also clarifies how the proposed approach can be advantageous in practice.\", \"small_comments\": \"- The comments made by the authors to clarify the pendulum experiments should also be added to the manuscript.\\n\\n- The discussion of computational cost is good, but I believe it could be improved further by adding a comparison to the other methods discussed in the paper (ResDMD, EDMD, HankelDMD), especially by mentioning the additional cost of the operations in NN-ResDMD versus ResDMD.\\n\\n\\n\\n\\n**While I believe the quality and strength of the paper have improved, I still do not think it makes a sufficiently strong contribution.** Possible additions for a stronger contribution are only acknowledged, while I believe some of them could or should have been explored in the current paper. \\n\\n**I am updating my overall rating from a 3 to a generous 5, but I would have rated it a 4 if this had been an option.**\"}", "{\"comment\": \"Thank you for your helpful comments and for allowing us to clarify the novelty of NN-ResDMD and its distinctions from existing work.\", \"regarding_our_innovation\": \"our key innovation lies in transforming spectral residuals into a practical tool for refining Koopman spectral components through iterative application. 
This process enables more accurate spectral analysis and naturally motivates the use of neural networks for dynamic basis optimization. Unlike the original ResDMD, which passively evaluates pre-computed eigenpairs, our method integrates spectral residuals into an iterative but active filtering framework, directly improving the computation of Koopman spectra and addressing challenges in spectral accuracy that previous methods could not resolve.(**See lines 169-177, highlighted in red**)\\n\\n\\nOur method takes a fundamentally different approach from existing deep learning methods by building upon the residual-based framework of ResDMD rather than the different Koopman-approximating loss functions following the variational principles of VAMPnets or the deep autoencoder structure in Lusch et al. By incorporating spectral residual measures into deep learning and introducing a structured representation that captures dependencies among eigenvalues, we achieve more compact and interpretable models for nonlinear systems with continuous spectra. This approach enables us to directly minimize Koopman spectral approximation errors while avoiding the high-dimensional representations or point-spectrum limitations of previous methods. These are the main contributions of our framework.\\n\\nThe proposed loss function and the VAMP score share the goal of optimizing approximations of the Koopman operator's spectral properties, establishing a connection in their ultimate purpose. However, although they both depend on the covariance matrices (in our manuscript Equation 3.2), their methodologies differ significantly. Our residual-based method directly minimizes the spectral approximation error of the Koopman operator and accommodates both point and continuous spectra, while the VAMP score follows a variational framework, maximizing the sum of singular values to approximate the point spectrum, primarily for stochastic systems. 
Moreover, while VAMP is specifically designed for Markov processes and requires the Koopman operator to be Hilbert-Schmidt, our approach focuses on deterministic systems and enables a more comprehensive spectral analysis that incorporates continuous spectra. This distinction in scope and methodology highlights how the two frameworks complement each other in addressing different aspects of spectral estimation.\\n\\nWe have added the above discussion of NN-ResDMD and VAMP in Appendix section A.4 and mentioned it in **line 484 highlighted in cyan**.\"}", "{\"comment\": \"Thank you for your response. After reading the rebuttal, I'd like to keep my score.\"}", "{\"comment\": \"Thank you for your detailed review and for reconsidering the value of our work. We appreciate your insights, and we are grateful that you took the time to engage deeply with our manuscript and references. Below, we address your additional comments:\\n\\n1. We appreciate the theoretical insights provided by the KVAD framework, which offers an interesting alternative perspective on addressing the limitations of VAMP for deterministic systems. We incorporated relevant discussions and comparisons with this work in our manuscript (lines 865-866, highlighted in blue). While KVAD demonstrates an elegant variational approach using kernel embedding, its numerical experiments are limited to low-dimensional systems (2D Van der Pol oscillator and 3D Lorenz system). In contrast, our NN-ResDMD method has been validated not only on toy models but also on high-dimensional real-world applications, including turbulence systems (~30,000 spatial dimensions) and neural recordings (>7,000 neurons), demonstrating its practical scalability and effectiveness.\\n\\n3. Without considering the challenges inherent to neural network optimization, ResDMD itself provides a solid theoretical foundation to guarantee the recovery of the entire spectrum including eigenvalues. 
The importance of the non-trainable basis is indeed a crucial aspect of our approach, as it helps prevent scenarios where a poor initialization could result in the spectral residual being zero. We acknowledge that this is fundamentally a machine learning challenge: starting from a suboptimal initialization and converging quickly to a poor local minimum can lead to undesirable eigenvalue estimates. Conversely, achieving a solution closer to the global minimum would significantly enhance the quality of the identified eigenvalues. However, the optimization of neural networks is a broader challenge and falls outside the primary scope of this paper. While it is a promising direction for future work, our focus in this study is on leveraging the ResDMD framework\\u2014which has theoretical guarantees for detecting the entire spectrum\\u2014and making it more practical and applicable through the integration of neural networks.\"}", "{\"comment\": \"Thank you for your thorough and detailed review. We appreciate very much for your time and helpful comments/suggestions and would like to address your concerns in the following aspects:\\n\\n\\n**Contribution**\\nOur key innovation lies in transforming spectral residuals into a practical tool for refining Koopman spectral components through iterative application. This process enables more accurate spectral analysis and naturally motivates the use of neural networks for dynamic basis optimization. 
Unlike the original ResDMD, which passively evaluates pre-computed eigenpairs, our method integrates spectral residuals into an iterative but active filtering framework, directly improving the computation of Koopman spectra and addressing challenges in spectral accuracy that previous methods could not resolve (**See lines 169-177, highlighted in red**).\\n\\n\\n**Neural Network Architecture and Innovation:** \\nOur network is a three-layer feedforward network whose layer sizes can be defined manually before every training run to adapt to each task. The activation function for each hidden layer is the tanh function. During training, we use the Adam optimizer for updating the network parameters. We have added the structural details in **lines 247-248 (highlighted in green)** of the main text.\\n\\n\\nWhile we acknowledge that different neural network architectures could be explored, we deliberately used a simple feedforward network to demonstrate that even basic architectures can achieve significant improvements. The choice of network architecture is secondary to our main contribution of establishing the optimization framework. However, we appreciate the suggestion and will explore more advanced architectures in future work to further enhance performance and robustness. We have clarified this in **lines 481-483 (highlighted in green)**.\\n\\n\\nWhile we appreciate the reviewer's valuable suggestion to extend our work to PINNs/PINOs, we believe such extensions are beyond the scope of this study for several reasons. First, the integration of PINNs/PINOs requires additional modifications to the framework, including embedding physical constraints directly into the learning process, which involves significant methodological and computational changes. Second, implementing and validating these extensions would require a thorough exploration of appropriate physical constraints and regularization techniques, as well as extensive experiments to ensure fair comparisons. 
Finally, the primary focus of this work is to establish an optimization framework for Koopman spectrum estimation, and introducing PINNs/PINOs would shift the focus away from our core motivation. Nonetheless, we agree that integrating PINNs/PINOs into the Koopman framework is a promising direction, and we plan to investigate this in future work to further enhance the applicability of our approach.\"}", "{\"summary\": \"The authors propose NN-ResDMD, a deep learning-based approach that directly estimates Koopman spectral components by minimizing a spectral residual. This method aims to improve the reliability of approximating Koopman spectra in nonlinear dynamical systems by automatically identifying optimal basis functions for the Koopman invariant subspace. The paper presents experiments on physical and biological systems, demonstrating the method's scalability and accuracy for complex dynamics.\\n\\nMy main comment on this paper is that it lacks sufficient innovation or fails to effectively demonstrate its unique contributions. The use of deep learning to estimate Koopman operators has been explored extensively in prior research. For example:\\n\\n[1] Lusch, B., Kutz, J. N., & Brunton, S. L. (2018). Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 9, Article 4950.\\n[2] Mardt, A., Pasquali, L., Wu, H., & No\\u00e9, F. (2018). VAMPnets for deep learning of molecular kinetics. Nature Communications, 9, Article 5.\\n[3] Mardt, A., Pasquali, L., Wu, H., & No\\u00e9, F. (2020). Deep learning Markov and Koopman models with physical constraints. Proceedings of Machine Learning Research, 107, 451-475.\\n\\nThe squared relative residual proposed in this paper has similarities to the VAMP-E score explored by Wu and No\\u00e9 in their work on VAMP [4]. The VAMP score framework has served as a basis for many deep learning models, including VAMPnets [2], state-free reversible VAMPnets and GraphVAMPnets. 
It would be beneficial for the authors to position their spectral residual measure within this established framework, providing a comparative analysis or highlighting any differences in formulation or performance.\\n\\n[4] Wu, H., & No\\u00e9, F. (2020). Variational approach for learning Markov processes from time series data. Journal of Nonlinear Science, 30, 23-66.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed NN-ResDMD method offers a deep neural network based approach for estimating Koopman spectral components\", \"weaknesses\": \"See the Summary\", \"questions\": \"(1) How does this method differ from existing deep learning approaches for Koopman operator estimation, and what substantial improvements does it offer in terms of robustness, accuracy, or efficiency?\\n\\n(2) What are the differences and connections between the proposed loss function in this paper and existing evaluation functions like the VAMP score?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Computational Cost:**\\nWe appreciate the reviewer's thoughtful comment regarding the computational cost of NN-ResDMD compared to classical ResDMD. While it is true that the computational cost of NN-ResDMD is much higher, we would like to clarify a key misunderstanding in the reviewer's reasoning. In NN-ResDMD, the evaluation of Koopman eigenpairs is not performed explicitly at each iteration of the optimization process. Instead, the loss function is defined based on the Koopman matrix and the dictionary generated at each iteration. The residual is automatically determined by this formulation, which eliminates the need for repeated explicit evaluations of eigenpairs during optimization. 
Gradient descent is then applied using Adam to minimize this loss, making the optimization process distinct from the classical ResDMD approach, where eigenpair evaluations are directly involved.\\n\\n\\nThe NN-ResDMD algorithm's computational demands stem primarily from its iterative optimization process. Each iteration involves a gradient descent update with complexity scaling linearly with both system dimensionality and neural network parameters. Though individual gradient steps are computationally lightweight for standard network architectures, the algorithm's efficiency issue lies in its repeated least-squares optimizations. Compared to standard single least-squares computation as in most numerical algorithms, NN-ResDMD requires multiple iterations to achieve convergence, with stochastic gradient descent methods showing a theoretical $O(1/n)$ convergence rate (See [1]). However, the method's nonlinear optimization nature also presents challenges for establishing concrete convergence bounds and error estimates.\\n\\n\\nEmpirically, without computing the pseudospectrum, the computational cost of ResDMD in our experiments typically ranges from seconds to minutes. In contrast, NN-ResDMD can take anywhere from tens of minutes to several hours, depending on factors such as data dimensionality, the number of snapshots, hidden layer configurations, dictionary sizes, and training convergence criteria. \\nWe acknowledge that NN-ResDMD involves additional computational steps due to its optimization process, particularly when employing large neural networks. However, these additional steps enhance the accuracy and robustness of Koopman eigenpair estimation, making the trade-off worthwhile. Nevertheless, the higher computational demands make NN-ResDMD less suitable for real-time or online Koopman model learning tasks.\\n\\nWe have added discussion on this topic in **lines 267-275** and **Appendix A.5 (highlighted in green)**.\\n\\n[1] F. Bach and E. 
Moulines, \\u201cNon-strongly-convex smooth stochastic approximation with convergence rate O(1/n),\\u201d in Advances in Neural Information Processing Systems (2013) pp. 773\\u2013781.\\n\\n\\n**General limitations:**\\nThank you very much for this suggestion and we have included now **a paragraph in the Conclusion section** to demonstrate the limitations of our proposed approach (**highlighted in green**). \\n\\n\\n**Experimental Results:**\", \"we_appreciate_your_feedback_on_experimental_clarity_and_address_specific_points_below\": \"1. **Pendulum Results:** \\nI understand your concern about the clarity and comparison of the pendulum results. The ground truth for this Hamiltonian system is indeed the unit circle, as the continuous spectrum and eigenvalues should lie on it. The shaded areas in the NN-ResDMD results represent the pseudospectrum, which is a key feature of our method that can capture the whole spectrum, unlike other methods that only show eigenvalues as points. While the shaded area may appear broad, this actually demonstrates our method's ability to detect the complete spectrum. This width of the shaded region accounts for computational uncertainties, as exact spectrum computation is computationally impossible. Theoretically, ResDMD guarantees that as this error tolerance approaches zero, the pseudospectrum converges to the true spectrum (the unit circle in this case) without spectral pollution. The Hankel-DMD results, though showing points near the unit circle and containing some polluted eigenvalues, only capture the point spectrum and miss the full spectral information. We revised the content to make this distinction clearer and better explain the theoretical significance of the pseudospectrum visualization in **line 327 highlighted in green**.\"}", "{\"summary\": \"This paper introduces a new method, NN-ResDMD, to estimate the spectral components of Koopman operators. 
It builds upon Residual Dynamic Mode Decomposition (ResDMD) and uses a neural network to identify basis functions instead of manually selecting them.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The description and clarity of the proposed approach is good overall.\", \"The proposed approach is theoretically justified by the guarantees of the ResDMD method it is built upon.\", \"The clustering results on the neural dynamics experiment are promising.\", \"The code for the experiments is already available online.\"], \"weaknesses\": \"**I do not think this paper makes a sufficiently strong contribution**. My understanding is that the proposed approach is simply ResDMD where the basis functions are parametrized by a neural network instead of specified manually, and then iteratively optimized. In my opinion this paper is better suited as a workshop paper in its current form.\\n\\nThe authors use feedforward neural network but the details of the architecture of the feedforward neural network are not provided. The authors **did not investigate the use and choice of other typically better-performing neural network architectures**. I don't think this should be left as future work but should already have been investigated. Similarly, exploring the direction of integrating the proposed approach with PINNs/PINOs would have made the contribution of the paper stronger, instead of leaving it as future work.\\n\\nThere is **no discussion of the computational cost** of the approach, which could be a big drawback of the proposed approach. As far as I understand it, the single evaluation (or few evaluations) of the eigenpairs in ResDMD is replaced by an iterative process where each iteration requires the evaluation of the eigenpairs in NN-ResDMD. This is an expensive part of the algorithm and there are no guarantees for the number of iterations required, so the running time there could be many times larger than for the classical ResDMD. 
In addition, there is also the cost of the additional optimization steps which can be very large if the neural networks are also large. Overall, the proposed algorithm seems significantly slower than existing approaches, so a trade-off between accuracy and computational time needs to be very carefully discussed theoretically and empirically.\\n\\nMore generally, the **limitations of the algorithm are not discussed**.\\n\\n**The results of the experiments are not clear**\\n- The results of the pendulum experiment are not clear. Need to specify more precisely and explicitly what the ground truth is to understand if these are good/bad results. Why are the results of the NN-ResDMD approach shaded areas while all the other methods are displayed using points? The shaded area also has a large radius, so maybe the results are not as good as stated compared to Hankel-DMD for instance for which the points remain close to the unit circle.\\n- The results of the turbulence experiments are not clear, since there is no ground truth provided. I am not familiar with that experiment so I do not know what the results are supposed to look like. It is unclear to me why the NN-ResDMD results in Figure 5 are considered good, while those in Figure 7 for Hankel-DMD are considered bad.\\n- The results of the neural dynamics experiments could be made clearer. Why do you choose a different number of eigenfunctions for the different approaches? Is this a fair comparison? Figure 6.a., 9.a. and 10.a. show the decomposed eigenfunctions for the different approaches. There is no ground truth provided, so it is not clear to me what we learn from these plots.\\n\\nGiven that the proposed approach is compared to Hankel-DMD in all the numerical experiments, it would be worth detailing what that approach actually does compared to the other DMD approaches discussed in the paper. 
Maybe that was the original aim of Appendix A.5 which has been left empty.\n\nThe diagram in Figure 1 needs to be cleaner and more \"professional\".\n\nMake sure to specify the variables over which the optimization is performed in equations (3.4) and (3.5).\", \"questions\": \"A collection of questions and suggestions has been made in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\nThank you for your valuable feedback. We have addressed each of your individual comments with corresponding color-coded responses. Additionally, we made some modifications (highlighted in purple) after considering all feedback comprehensively. Specifically, the changes in Algorithm 1 now explicitly include the pseudospectrum computation and Koopman matrix estimation, from which eigenvalue-eigenfunction pairs can be directly derived, thus better emphasizing the core advantage of the ResDMD methodology. We have also rephrased some text (the purple-highlighted paragraphs) to a more concise version to maintain the 10-page limit. We appreciate your attention to these changes.\"}", "{\"comment\": \"I thank the authors for the detailed response. I have also carefully reviewed the comments from other reviewers along with the replies. After further consideration, I have gained a better understanding of the value of the work and decided to slightly increase my score. Below are some additional comments:\n\n1) VAMP-like methods applicable to deterministic systems do exist, e.g., https://link.springer.com/article/10.1007/s00332-019-09574-z\n\n2) The main issue with the proposed method lies in minimizing the residual in Equation (3.1). 
While this approach can identify some eigenvalues and eigenfunctions, it does not guarantee that the \\\"important\\\" eigenvalues will always be found, nor does it ensure the discovery of N_K \\\"distinct\\\" eigenvalues. I suspect that the optimization results would heavily depend on the initialization, which is likely why authors mentioned the need for a non-trainable basis in Line 246. However, there is no theoretical guarantee that the non-trainable basis will always allow us to identify all the important eigenvalues.\"}", "{\"comment\": \"I don't quite understand the statement: \\\"ResDMD itself provides a solid theoretical foundation to guarantee the recovery of the entire spectrum, including eigenvalues.\\\" Theoretically, if all the eigenvalues are found, it naturally ensures that the loss function is minimized. However, the reverse is not necessarily true. When the loss function is minimized, it is possible that only part of the eigenvalues have been identified.\"}", "{\"comment\": \"2. **Turbulence Results:**\\nWe apologize for any lack of clarity in presenting the turbulence experiment results and appreciate the opportunity to provide more details. The ground truth in the first plot of Figure 5 represents the pressure field distribution around an airfoil, with spatial dimensions of approximately 30,000. This high-dimensional pressure field exhibits a clear spatial separation between the upper and lower surfaces of the airfoil. \\n\\nOur NN-ResDMD method successfully captures this pressure field structure in its first Koopman mode (the second plot in Figure 5), corresponding to the smallest residual value over all computed eigenpairs. This demonstrates our method\\u2019s ability to identify physically meaningful patterns in high-dimensional fluid systems. Specifically, the first Koopman mode accurately reproduces the spatial pattern observed in the ground truth pressure field. 
\\n\\nIn contrast, while Hankel-DMD is theoretically well-founded and has shown excellent performance in many high-dimensional systems, its results (shown in Figure 7) fail to capture the fundamental pressure field structure. The Koopman modes corresponding to the smallest residuals in Hankel-DMD do not reproduce the clear spatial separation pattern seen in the ground truth.\\nWe have revised **Section 4.2** to clarify the purpose of the experiment, the ground truth and the interpretations(see **lines 374-386 highlighted in green**). We also added a description of the experiments conducted with Hankel-DMD in Appendix A.7.1(see **lines 924-927 highlighted in green**) and **line 394 in the main text (highlighted in green)** .\\n\\n3. **Neural Dynamics Results:** \\nWe would like to thank the reviewer for raising this point. In addition to the proposed NN-ResDMD method, we also applied three typical Koopman mode decomposition methods which are suitable for high-dimensional datasets to the neural dynamics dataset, which are pre-processed with SVD to a reduced dimension of 300. The reasons of choosing the dictionary size (i.e. the number of Koopman eigenfunctions) are the following:\\n\\n (a).For NN-ResDMD, we chose 300 trained basis and 300 first-order monomial basis as the dictionary for the 300 reduced observables because we think we need dictionary rich enough to span the Koopman invariant subspace, thus the size of trained dictionary should be at least the same as the original observable size. Then based on the rank of estimated Koopman eigenvalues, we select the dominant 501 eigenfunctions to avoid the eigenfunctions with zero eigenvalues.\\n\\n (b).For Hankel DMD, the number of delays (as dictionary size/number of eigenfunctions) is first constrained by the temporal sample size (i.e. snapshot size) because it cannot exceed the maximum snapshot size. Therefore, it is impossible to choose the same dictionary size as the NN-ResDMD example. 
Choosing the delay too small will result in an insufficient dictionary size to span the Koopman invariant subspace, while choosing it too large will reduce the actual snapshot size available to estimate the covariance matrices in the estimation of the Koopman matrix. Therefore, we chose a compromise delay number of 50 that satisfies both needs.\n\n (c).For Kernel ResDMD, the dictionary size is theoretically determined to be the number of snapshots. Therefore, we cannot make the dictionary size consistent with the NN-ResDMD example.\n\nBased on the above justifications, we believe our choices of dictionary sizes are reasonable and ensure a fair comparison across the methods. We have added these details in **Appendix Section A.9** to justify our choices to the readers and mentioned them in **lines 433-434 (highlighted in green)**.\n\n\nWhile there is no ground truth for how eigenfunctions should behave in these empirical data, the trial labels (e.g., colored bars in Figures 6a, 9a, and 10a) serve as a reference for evaluating method performance. Eigenfunctions estimated by NN-ResDMD exhibit clear differentiation across trial labels even upon visual inspection, demonstrating its ability to capture distinct dynamic patterns. This differentiation validates NN-ResDMD\u2019s utility for high-dimensional, complex datasets. 
We pointed out this differentiation in **line 441 of the main text (highlighted in green)**.\"}", "{\"comment\": \"We sincerely thank the reviewer for the constructive feedback. The questions and doubts raised in the **Weakness** and **Question** parts are answered in the following:\n\n(1) Regarding the question from **Weakness**: $(\\\\psi_i)_{i=1}^N$ is a set of basis functions that span the Koopman invariant subspace. Common choices are polynomials, Fourier bases, RBF functions, etc. Typically, choosing different basis functions can make the results vary substantially. However, the optimal choice of basis functions is usually unknown a priori and depends heavily on the specific dynamical system. So, we try to learn the optimal basis functions directly from data by parametrizing the basis functions with neural networks instead of manually selecting them. Next, the error in $\\\\psi_i$ directly impacts the finite-dimensional projection of the Koopman operator. However, our method controls this through spectral residual minimization (Equation 3.3), which ensures the learned basis functions adequately capture the Koopman dynamics. Theoretically, this is justified because the neural network acts as a universal approximator in the Barron space (as discussed in Appendix A.3). We have revised the main text (**lines 83, 88-90, 161-163, highlighted in blue**) to clarify this point.\n\n(2) Regarding the question from **Question**: The pendulum system in Fig. 4 is indeed measure-preserving due to its Hamiltonian nature, which theoretically implies that the whole spectrum, including all eigenvalues, should lie on the unit circle, i.e., $\\\\|f\\\\|_2 = \\\\|\\\\mathcal{K}f\\\\|_2=\\\\|\\\\lambda f\\\\|_2=|\\\\lambda|\\\\|f\\\\|_2 \\\\Rightarrow |\\\\lambda|=1$. The fact that other methods show eigenvalues inside the unit circle doesn't reflect the true dynamics but rather indicates numerical approximation errors. 
This highlights a key advantage of NN-ResDMD: by minimizing the spectral residual from the ResDMD method directly, NN-ResDMD better preserves the property of the spectrum, correctly identifying that the whole spectrum (eigenvalues + continuous spectrum) should lie on the unit circle. Traditional methods like EDMD and Hankel-DMD cannot handle the continuous spectrum and also generate spurious eigenvalues due to discretization errors, which results in eigenvalues incorrectly appearing inside the unit circle. We have included this property of the pendulum system in **lines 308-309** in the main text (**highlighted in blue**).\n\n(3) For the **minor comment**, the reviewer is correct, and we would like to thank them for the helpful correction. We have revised it accordingly in **line 217 (highlighted in blue)** in the main text.\"}", "{\"comment\": \"Thank you for reviewing our paper and for the insightful comments and suggestions. We welcome further discussion to refine our work. Now let us answer the questions you mentioned:\n\nTo Question (1):\nOur key innovation lies in transforming spectral residuals into a practical tool for refining Koopman spectral components through iterative application. This process enables more accurate spectral analysis and naturally motivates the use of neural networks for dynamic basis optimization. 
Unlike the original ResDMD, which passively evaluates pre-computed eigenpairs, our method integrates spectral residuals into an iterative, active filtering framework, directly improving the computation of Koopman spectra and addressing challenges in spectral accuracy that previous methods could not resolve (**see lines 169-177, highlighted in red**).\n\nTo Question (2):\nWe appreciate your suggestions on these relevant works and have incorporated references to better contextualize our research within the broader literature (please check **lines 479-484, highlighted in red**).\n\nTo Question (3):\nThe key distinction between EDMD and NN-ResDMD in terms of eigenvalue convergence lies in their theoretical approaches and guarantees. In Section 5.3, EDMD proves weak spectral convergence by showing that for any sequence of eigenvalues $\\lambda_N$ of $K_N$ with associated normalized eigenfunctions $\\phi_N$, there exists a subsequence converging to an eigenvalue-eigenfunction pair of $K$, provided the weak limit of the eigenfunctions is nonzero. This convergence is established under the assumption of a bounded Koopman operator. In contrast, NN-ResDMD, building upon ResDMD's framework, approaches spectral convergence through the minimization of spectral residuals, which provides a more direct and practical way to identify genuine spectral components. This residual-based approach not only requires weaker assumptions (only closed and densely defined operators) but also naturally handles both point and continuous spectra, which effectively filters out spurious eigenvalues. Furthermore, the residual-based convergence in NN-ResDMD also offers a quantifiable measure of approximation quality for each spectral component, which is not available in EDMD's weak convergence framework. We have also added an explanation of these distinctions in **lines 224-225, 229-230, highlighted in red**, of the main text. 
\\n\\nTo Question (4): \\nThe alternating optimization strategy in NN-ResDMD separates the least-squares solution for $ K $ from the gradient-based update for $ \\\\theta $, ensuring computational efficiency and numerical stability. This approach guarantees $ K $ is the optimal least-squares solution at each iteration, which allows the optimization to focus entirely on refining $ \\\\theta $. In contrast, methods like Takeishi et al. (2017) and Otto \\\\& Rowley (2019) integrate the least-squares step into the gradient computation, which results in a unified but tightly coupled framework. While this coupling has its merits, it can introduce challenges such as increased complexity in optimization for incorporating prior knowledge or theoretical constraints. By decoupling these steps, NN-ResDMD aligns naturally with spectral residual minimization, facilitating accurate refinement of both point and continuous spectra without additional optimization complexity. This explicit separation also enhances numerical stability and adaptability, which makes NN-ResDMD particularly effective for analyzing complex dynamical systems. We have addressed this in **lines 243-245 highlighted in red** in the main text.\\n\\nTo Question (5): \\nOur NN-ResDMD and EDMD-DL share similar algorithmic structures, which naturally arise from the Galerkin approximation framework common to DMD-based methods, notably in the formula for updating matrix $K$. However, the fundamental distinction lies in their theoretical foundations and objectives. EDMD-DL minimizes a Frobenius-norm-based loss function to optimize the least-squares approximation of the Koopman matrix, which follows the traditional EDMD framework. In contrast, NN-ResDMD is built upon ResDMD's theoretical foundation, which minimizes the spectral residual loss and directly evaluates how well the computed eigenpairs satisfy the spectral properties of the Koopman operator. 
The $K$ matrix update formula, while appearing similar in both methods, serves different purposes: in EDMD-DL it represents the optimal least-squares solution, while in NN-ResDMD it ensures the minimization of spectral residuals. This distinction in the theoretical foundation leads to fundamentally different learning behaviors and better spectral approximation properties in NN-ResDMD.\\n\\n[1] M. Korda and I. Mezi\\u0107, \\\"On convergence of extended dynamic mode decomposition to the Koopman operator,\\\" Journal of Nonlinear Science, vol. 28, pp. 687\\u2013710, 2018.\"}", "{\"comment\": \"Yes, you raise a valid point. This limitation primarily arises from our use of neural networks, which introduces inherent optimization challenges such as getting trapped in poor local minima due to unfavorable initialization. Neural network implementation can affect the completeness of eigenvalue identification through these optimization issues. In practice, both the neural network initialization and the choice of dictionary size play crucial roles in determining how many eigenvalues can be effectively captured.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your comment and for acknowledging the improvements in the clarity and quality of our manuscript. We appreciate your updated evaluation and your suggestions for further refinement.\\n\\n1. **Pendulum Experiments**: \\n The clarifications we provided regarding the pendulum experiments have now been incorporated into the manuscript, as per your suggestion. This ensures that readers can better understand the significance of the results and their practical implications. It is highlighted in lines 325-338 in green color.\\n\\n2. **Computational Cost Comparison**: \\n We have added a detailed discussion on the computational costs of NN-ResDMD compared to other methods, including EDMD, EDMD-DL, ResDMD, and Hankel-DMD. 
This section highlights the additional costs introduced by NN-ResDMD's iterative optimization and pseudospectrum computation, contrasting them with the computational steps and runtime requirements of the other approaches. This addition aims to address your concern about making the comparison more explicit and comprehensive. It is highlighted in Appendix A.5, lines 885-908, in green color.\n\n\nThank you again for your time and insightful feedback, which have greatly helped us enhance the manuscript. We appreciate your updated rating and hope the revisions align more closely with your expectations.\"}", "{\"comment\": \"**Hankel-DMD**\nThank you for pointing out this shared methodology. We have added a brief overview of Hankel DMD in **Appendix A.7.1 (lines 902-930)**, which might also serve as a justification of our baseline method choice, mentioned in the main text at **line 303**. We also added a short summary in **lines 301-304** regarding the shared usage of Hankel-DMD.\n\n**Figure 1:** \nThank you for your suggestion for improving Figure 1. We have updated and included it in the main text.\n\n\n**Optimization Variables:** \nWe would like to clarify that the variables in (3.4) and (3.5) are not explicitly defined because they are for theoretical purposes. The explicit definitions of the optimization variables ($\\theta$) that encompass all trainable parameters in the neural network are given in Equations (3.7) and (3.8). The optimization process follows standard neural network training practices. Nevertheless, the variables in (3.4) and (3.5) align naturally with the later equations, which clarify the parameter optimization process.\"}", "{\"summary\": \"A method for computing the Koopman spectra of dynamical systems is proposed. It stands on two main ingredients: 1) the use of spectral residual as a loss function, and 2) the use of NNs for constructing observables. 
Its applications to a pendulum, turbulence, and neural dynamics are presented, and the proposed method is shown to be successful in extracting the Koopman spectra and analyzing the data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The method is technically reasonable.\", \"The idea is somewhat new. To my knowledge, using the resDMD objective with neural observables is quite natural but has not been exactly practiced yet.\", \"The experiments nicely demonstrate the utility of the method.\"], \"weaknesses\": [\"(1) Although the work is solid, I do not think the technical contribution is so significant to be included in the ICLR proceedings. The use of neural observables has been known and practiced well in these 8 years or so, and ResDMD has been already well known and discussed recently. The technical contribution of this work inevitably looks incremental.\", \"(2) The rich literature on the use of NNs as DMD observables, other than Li et al. (2017), seems to be overlooked. For example, even restricting the scope to the mere use of NNs for DMD-based analysis (i.e., excluding more applied perspectives such as control), the following papers (and probably more) should be relevant:\", \"N. Takeishi, Y. Kawahara, T. Yairi: Learning Koopman invariant subspaces for dynamic mode decomposition, Advances in Neural Information Processing Systems 30, 2017, pp. 1130\\u20131140\", \"B. Lusch, J. N. Kutz, S. L. Brunton: Deep learning for universal linear embeddings of nonlinear dynamics, Nature Communications, vol. 9, no. 1, p. 4950, 2018\", \"A. Mardt, L. Pasquali, H. Wu, F. No\\u00e9: VAMPnetsfor deep learning of molecular kinetics, Nature Communications, vol. 9, no. 1, p. 5, 2018.\", \"E. Yeung, S. Kundu, N. Hodas: Learning deep neural network representations for Koopman operators of nonlinear dynamical systems, Proceedings of the 2019 American Control Conference, 2019, pp. 4832\\u20134839\", \"S. E. Otto, C. W. 
Rowley: Linearly recurrent autoencoder networks for learning dynamics, SIAM Journal on Applied Dynamical Systems, vol. 18, no. 1, pp. 558\\u2013593, 2019\", \"O. Azencot, N. B. Erichson, V. Lin, M. W. Mahoney: Forecasting sequential data using consistent Koopman autoencoders, Proceedings of the 37th International Conference on Machine Learning, 2020, pp. 475\\u2013485\", \"H. Wu, F. No\\u00e9: Variational approach for learning Markov processes from time series data, Journal of Nonlinear Science, vol. 30, no. 1, pp.23\\u201366, 2020\", \"D. J. Alford-Lago, C. W. Curtis, A. T. Ihler, O. Issan: Deep learning enhanced dynamic mode decomposition, Chaos: An Interdisciplinary Journal of Nonlinear Science, vol. 32, no. 3, p. 033116, 2022\", \"T. Iwata, Y. Kawahara: Neural dynamic mode decomposition for end-to-end modeling of nonlinear dynamics, Journal of Computational Dynamics, vol. 10, no. 2, pp. 268\\u2013280, 2023\", \"I do not think all of them should be included in the reference with detail, but at least the existence of such a rich literature should be mentioned to help readers to better understand the context of the research.\", \"Below are relatively minor technical points that I found unclear.\", \"(3) The authors emphasize the \\\"lack of theoretical guarantee of convergence\\\" of EDMD for several times. In what sense is this \\\"lack\\\" supposed? For example, the work by Korda & Mezi\\u0107:\", \"M. Korda & I. Mezi\\u0107: On convergence of extended dynamic mode decomposition to the Koopman operator, Journal of Nonlinear Science, vol. 28, pp. 687\\u2013710, 2018\", \"discusses the convergence in some sense.\", \"(4) The proposed method alternates between the gradient-based update of $\\\\theta$ and the least squares solution to get $K$. Is there any insight of this choice? I am asking this because it is also possible to include the least squares within the gradient computation, as done in Takeishi et al. 
(2017); Otto & Rowley (2019) listed above, for example.\", \"(5) In the experiment, does EDMD-DL follow exactly the same configuration as NN-ResDMD except for the loss function? (It should.) Please elaborate on this more clearly to make it easier to assess the benefit particular to the proposed method.\"], \"questions\": \"Although there is no specific question that will surely affect my evaluation, some comments, if any, on the points listed in the Weaknesses section would be highly helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Given these limitations, a decision to Reject is warranted.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed several key points raised by reviewers. Concerns about the novelty of the method were met with arguments emphasizing the integration of spectral residuals and neural networks for improved Koopman spectral estimation. However, the justification for the approach\\u2019s distinctiveness remained insufficient, with reviewers noting that the contributions were largely incremental. Questions regarding computational efficiency and scalability were addressed with additional explanations and runtime comparisons, but these responses highlighted rather than mitigated the method\\u2019s computational limitations. The authors also expanded the discussion on related work and clarified technical aspects such as initialization and basis function selection, which improved the presentation but did not substantively strengthen the paper\\u2019s core contributions. Overall, while the authors engaged effectively with the feedback and improved the manuscript in clarity and scope, the responses did not fully resolve concerns about the paper\\u2019s limited novelty and practical impact. These factors influenced the final decision to Reject.\"}" ] }
53kUa92R7J
Loius (Look it up in the Structure): Benchmark and Techniques for Document structure aware LLM based Retrieval
[ "Vineet Kumar", "vishwajeet kumar", "Jaydeep Sen", "Riyaz Ahmad Bhat", "Sachindra Joshi" ]
We thank the reviewers for their valuable feedback. We have decided to withdraw the submission from ICLR after careful consideration.
[ "information retrieval", "llm", "model based retrieval", "document search", "retrieval benchmark", "document structure", "benchmark" ]
https://openreview.net/pdf?id=53kUa92R7J
https://openreview.net/forum?id=53kUa92R7J
ICLR.cc/2025/Conference
2025
{ "note_id": [ "cmBHO4oUGN", "C41NXfbuJX", "6VxH6NPODa", "1zqG0khfSY" ], "note_type": [ "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730347787383, 1731570013431, 1730484387055, 1730100421843 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5983/Reviewer_nRhm" ], [ "ICLR.cc/2025/Conference/Submission5983/Authors" ], [ "ICLR.cc/2025/Conference/Submission5983/Reviewer_NoGp" ], [ "ICLR.cc/2025/Conference/Submission5983/Reviewer_yrYc" ] ], "structured_content_str": [ "{\"summary\": \"Inspired by the way humans often search through the Table of Contents when reading books, this paper introduces a new task: given a query and the Table of Contents (TOC) of a long document, the objective is to retrieve the correct subsection in the TOC that contains evidence to answer the query. The authors introduce and release a new multi-domain dataset, ToCTome, which consists of 18 books across 6 domains. For each subsection, they use the Mixtral 8x7b model to generate questions, forming subsection-query pairs. Additionally, they split the data into training, development, and test sets, and fine-tune Mistral Instruct v0.2 with the LoRA adapter. Experimental results demonstrate that the fine-tuned Mistral Instruct v0.2 achieves an impressive R@1 score of 82.6%, outperforming BM25, dense retrieval models, and the original, non-fine-tuned Mistral Instruct v0.2 on this task. These findings highlight the strong capabilities of LLM in ToC-based retrieval.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The paper introduces a new dataset containing queries and their corresponding subsections, which may contribute to research in document retrieval.\\n2. Constructing the dataset and conducting the experiments required substantial effort.\", \"weaknesses\": \"1. The authors introduce a new problem but do not explain its importance and significance. 
The purpose of using a large language model (LLM) to locate the subsection relevant to the query is unclear. The authors simply state that this \\\"mimics how humans leverage the structure of a book,\\\" but they don\\u2019t explain why this is necessary. I believe the authors should address this question: \\\"After finding the subsection relevant to the query, what can be achieved?\\\"\\n2. The authors assume that each query can be matched to a corresponding subsection in the Table of Contents. However, in real-world scenarios, many queries may not align with the Table of Contents; in particular, some fine-grained questions cannot be mapped to a specific subsection.\\n3. The proposed dataset and method are designed for queries that correspond to a single subsection. However, some queries may span multiple subsections, which this dataset and method do not account for. This results in oversimplified problem modeling.\\n4. The writing of this paper has several issues: (1) Many sentences do not follow an academic tone, and exclamation marks are used frequently, which is uncommon in academic papers. (2) Detailed experimental results are repeated in the ABSTRACT, INTRODUCTION, and EXPERIMENTS sections.\", \"questions\": \"The questions are listed in the 'Weaknesses' section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their valuable feedback. We have decided to withdraw the submission\\nfrom ICLR after careful consideration.\"}", "{\"summary\": \"This paper proposes a new task for retrieval models: that of finding the relevant section title in a book, given a question. They build a dataset for this task by using PDF parsing to extract the table of contents from books. 
They generate questions for the task through the use of Mixtral.\\n\\nThey then train a model to do well on this task, called Louis. Their model is trained on the training set (also generated with Mixtral) and shows improved performance over other model types, including non-fine-tuned versions of DSI, BM25, and Mistral. They show an error analysis where their model performs better.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The newly proposed task is an interesting contribution, and to my knowledge has not been studied before\", \"They use a wide variety of retrieval model architectures, from cross-encoders, to generative retrieval, sparse retrieval, and dense retrieval.\", \"Their proposed model does better on the data they generated\"], \"weaknesses\": \"1. There is a lack of important details. How many chunks/passages are there in the corpus for dense retrieval (or BM25)? Does each chunk for these models also get the section, so that the model can use that information? Why are there no long-context model evaluations (Gemini/Prolong/even just a 128k Llama)? Does Mistral see the entire book or just the ToC? etc.\\n2. Conceptually, performing ToC retrieval is strictly easier than searching for the specific passage that a question was generated from. From related work on searching for the relevant passage in long documents (see the \\\"Scrolls Benchmark\\\" [1] for task examples) IR models perform pretty well with retrieval (see the LoCo benchmark [2]). It seems something is wrong with the setup if the models are doing this poorly. My guess is that the book is chunked but not given the section headers in the chunks, so they lack that context. Furthermore, BGE is a decent baseline, but pretty weak in comparison - see the MTEB leaderboard where BGE-base (I assume because of the 768 dim vector, but again not described) is ranked #47.\\n3. Overall, the task does not appear very challenging. 
R@3 is 90% and close to that for many baseline models (DSI, BM25). If BM25 can get nearly the same performance as the proposed model, it does not seem like much of an improvement (and is perhaps not even statistically significant).\\n4. There is no analysis of the false negative information in these questions. It is likely that many questions are answered in several places and that falsely lowers the score. I would guess that if the authors did an analysis of the Recall @ 3 failures that the answer is given in both places. This makes (3) more of an issue and lowers the quality of the dataset.\\n5. The main modeling contribution seems to be that using the same data generation process as making the test set, and then training on it, makes a model better able to perform this task. This is unsurprising and demonstrated over and over in the literature on every modality. This would be totally fine, if this was the only issue, but I assume that this means that the proposed Louis model cannot do any other tasks, as it is likely overfit for this. \\n\\nOverall, I think the task could be useful for the retrieval community as a useful long-context evaluation. However, I am unconvinced of the dataset's quality and there appear to be modeling concerns (or at least a lack of information).\\n\\n[1] Scrolls Benchmark: https://www.scrolls-benchmark.com/tasks\\n\\n[2] LoCo benchmark: https://arxiv.org/pdf/2402.07440v2\", \"questions\": [\"Most questions are in the weaknesses. 
But some comments about writing:\", \"Citations should be \\\\citep{}, it seems like they are not, and are instead inline\", \"[Minor] It would be nice to see some examples in the appendix, in terms of passage it came from and question generated.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The research paper introduces Loius, a novel retrieval system that leverages the structure of ToC to enhance retrieval processes. By mimicking how humans intuitively use a ToC to navigate books, Loius proposes a fresh retrieval paradigm that contrasts with traditional methods based on keyword or vector-based searches. The paper also presents a new benchmark dataset ToCTome, which comprises 18 books across six diverse domains, providing a robust platform for evaluating ToC-based retrieval systems. The results demonstrate Loius\\u2019s superior performance in accurately locating relevant sections, boasting a Recall@1 score significantly higher than the next best system (DSI) and other baseline models like BM25 and DPR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The innovative use of the ToC to guide the retrieval process introduces a unique approach to document navigation, potentially transforming retrieval strategies for structured documents.\", \"Extensive experiments validate the system\\u2019s efficacy, particularly its ability to outperform traditional retrieval methods.\", \"The creation of the ToCTome benchmark contributes a valuable resource to the research community, fostering further innovation in retrieval systems that understand document structure.\"], \"weaknesses\": [\"The specific application of ToC-based retrieval to books may not generalize to other types of content that lack a clear hierarchical structure, such as unstructured web pages or documents without a ToC.\", \"The technical novelty appears 
limited primarily to the adaptation of existing retrieval technologies to a new input structure (ToC), potentially limiting the method\\u2019s broader applicative insights.\", \"In Section 5.2, I find that the maximum input length of Loius and DSI are significantly different. It is unclear if this difference may affect the experimental results.\"], \"questions\": \"1. Can the methods developed for Loius be adapted for general retrieval tasks beyond structured documents like books? If so, what modifications would be necessary?\\n2. Why were different maximum input lengths used for Loius and DSI in your experiments? Could this difference affect the comparative performance results reported?\\n3. Given the reliance on a structured ToC, how does Loius handle content variations within sections that might not be accurately reflected by their ToC descriptions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
53MDeiZ9mC
Mitigating Gradient Interference for Efficient Sparse Fine-Tuning of Large Language Models
[ "Yuhang Wu", "Tianyu Xie", "Weizhong Huang", "Xiawu Zheng", "Fei Chao", "Rongrong Ji" ]
Large Language Model (LLM) sparsification plays a crucial role in model compression. Among various methods, training-free approaches are highly efficient but often result in accuracy loss, while full fine-tuning requires substantial computational resources. Recent works have begun exploring sparse Parameter-Efficient Fine-Tuning (PEFT) methods, but lack theoretical guidance. This study presents the first comprehensive theoretical framework for efficient sparse fine-tuning, addressing a critical gap in the literature. Specifically, we identify gradient conflict as the primary issue in PEFT sparse methods, wherein masked pretrained weights and corresponding PEFT weights exhibit competing optimization objectives during fine-tuning, potentially compromising model performance. We theoretically model this phenomenon and identify three key factors influencing the efficacy of fine-tuning in sparsified LLMs: (1) error introduced by weight norms, (2) error composition from PEFT structures, and (3) error accumulation during fine-tuning. Leveraging these theoretical insights, we propose a novel iterative sparse fine-tuning scheme that systematically addresses each identified factor. We implement an iterative process alternating between sparsity and fine-tuning to mitigate the error accumulated in a single turn of fine-tuning. We employ pooling instead of low-rank decomposition to reduce error composition from PEFT structures. We apply normalization to PEFT modules during fine-tuning, constraining error values by limiting weight norms while preserving representational capacity. Additionally, we utilize Centered Kernel Alignment-based information similarity assessment for adaptive allocation of layer-level sparsity and PEFT parameter quantities, addressing layer-specific redundancy. Empirical evaluation on a 50\% sparse LLaMA-2 7B model demonstrates the superiority of our approach, achieving lossless compression.
[ "Large language models", "Sparse" ]
Reject
https://openreview.net/pdf?id=53MDeiZ9mC
https://openreview.net/forum?id=53MDeiZ9mC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sjmjO2Ygxi", "q1ly4BWBDk", "i8IDoAi9jb", "hdUtG6Gx9p", "PwdvNYvgG1", "58VqPS1ZfG" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1737523394755, 1730149633297, 1730706274620, 1731100382083, 1734480414034, 1731104238876 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission412/Reviewer_FRnz" ], [ "ICLR.cc/2025/Conference/Submission412/Reviewer_JXRk" ], [ "ICLR.cc/2025/Conference/Submission412/Reviewer_eJLj" ], [ "ICLR.cc/2025/Conference/Submission412/Area_Chair_32pE" ], [ "ICLR.cc/2025/Conference/Submission412/Reviewer_3qDY" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper provides a theoretical examination of parameter-efficient fine-tuning (PEFT) for sparse large language models (LLMs). The authors explore Sparse Weight Gradient Interference, identifying large weight norm as a potential main source of loss error in general sparse PEFT methods. Additionally, they suggest that the LoRA structure contributes to loss error in sparse PEFT and propose an alternative PEFT approach consisting of three steps\\u2014pooling, linear transformation, and expansion\\u2014which they argue achieves a tighter upper bound on loss error compared to LoRA. They also raise the possibility of error accumulation over fine-tuning iterations as a further source of loss. 
Alongside their theoretical insights, the authors present a brief empirical analysis showing that their sparse PEFT method can outperform LoRA across certain benchmarks when combined with the sparseGPT and Wanda LLM pruning techniques.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The theoretical examination of Sparse Weight Gradient Interference in sparse PEFT methods is sound and provides valuable insights.\", \"The proposed method outperforms LoRA in specific method-benchmark combinations.\", \"The improvements apply across different levels of sparsity and sizes of language models.\"], \"weaknesses\": [\"The paper lacks detailed experimental settings, making it unclear if the comparison with the baseline is entirely fair. Key details such as the number of data points used for LoRA, the number of fine-tuning iterations, hyper-parameters, and any tuning performed (particularly for learning rate) are missing.\", \"The empirical evaluation is limited to classification tasks, and additional open-ended generation downstream tasks would strengthen the assessment of the proposed method.\", \"Even on the limited set of reported downstream tasks, improvements are inconsistent, with LoRA outperforming the proposed method on certain model-task combinations, such as Llama2(13b)-wanda.\", \"There is no discussion on result variance; only single values are reported. 
Given the minor improvements observed, additional experiments with varying random seeds are needed to allow readers to assess the method's efficacy more reliably.\", \"The statement 'Empirical evaluation on a 50% sparse LLaMA-2 7B model demonstrates the superiority of our approach, achieving lossless compression' in the abstract is misleading and not supported by the results.\"], \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors focus on the sparsification and PEFT recovery of LLMs. The authors identify and address the Sparse Weight Gradient Interference (SWGI) phenomenon, where gradients from masked weights interfere with the fine-tuning of active parameters, leading to performance degradation. The authors conduct a theoretical analysis of this problem, and propose a new iterative sparse fine-tuning scheme to handle this problem. Experiments on benchmarks show that the proposed method recovers the accuracy better than LoRA.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors point out the Sparse Weight Gradient Interference (SWGI) phenomenon, attributing it to the mismatch between the sparse structure of the model weights and the dense structure of the PEFT weights.\", \"The authors provide a theoretical analysis of the SWGI phenomenon and related bounds on the errors introduced by sparsification.\"], \"weaknesses\": [\"The structure and readability of the paper could be improved. 
Also, a main figure illustrating the problem setting or the SWGI phenomenon, and the proposed method could be helpful.\", \"A detailed ablation study on the many components that comprise the method (CKA-guided sparsity setting, PEFT parameter allocation, sparsity scheduling) is missing.\", \"More experiments on well-accepted benchmarks such as MMLU or GSM8K are needed to verify the effectiveness of the proposed method.\"], \"questions\": [\"How does the proposed method perform for structured or semi-structured sparsity?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"A common method to prune iteratively is to learn the mask and also update the model weights to recover the loss induced by pruning using fine-tuning. The mask is applied such that the reconstruction loss of each layer after applying the mask is minimized. Rather than regular fine-tuning, this method employs LoRA to heal the model. They posit that using LoRA to heal the model introduces errors owing to the \\\"Sparse Weight Gradient Interference\\\" problem. This occurs because the LoRA module is not aware of where the masks have been applied to the frozen pre-trained model. Thus, it could have gradients for parameters where the frozen pre-trained model is pruned and set to 0, resulting in interference. They estimate the error that using LoRA incurs.\\n\\nThe proposed method uses bilevel optimization where in the upper level the mask is learnt and in the lower level the weights are updated using a modified LoRA. The modified LoRA applies a pooling operation on the input to reduce it to a lower dimension g, followed by multiplying the resulting value with a weight G, and then projects the output back to the original dimension. The weights G are learnt in this PEFT variant. 
A further step to alleviate the error introduced by using LoRA to heal is to rein in the magnitude of the weight change by using normalization such as weight decay or dropout. Rather than introducing the parameters uniformly to all layers for fine-tuning, more parameters are allocated to layers with higher reconstruction loss. The layer-wise sparsity rate is set based on the Hilbert-Schmidt Independence Criterion metric and is inversely proportional to it, thereby pruning layers with higher information redundancy. This whole process is repeated for several iterations, and the layer-wise sparsity rate, the sparsity mask and the model weights are updated at each iteration.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Allocating additional fine-tuning parameters to the layers with higher reconstruction loss is novel.\\n2. They demonstrate that the zero-shot performance of the model after pruning is the best when compared to the other baselines on various tasks.\", \"weaknesses\": \"1. While the sparsity is induced by applying masks, unstructured pruning does not reduce the latency. In real-world applications, reducing the latency is also crucial. Could you report the latency improvements of the final model and how it compares to just using N:M sparsity and WANDA, etc.?\", \"questions\": \"1. Wanda does not require updating the model weights during pruning, unlike the proposed method and SparseGPT. So if Wanda is used as a baseline here, is it also fine-tuned using the same amount of data as this method for fair comparison?\\n2. It is not clear in line 4 of Algorithm 1 why the sparsity mask is updated using either SparseGPT or WANDA.\\n3. Given that this method uses PEFT and does not have to update all the model parameters like SparseGPT, can you report the amount of time and memory required to run your method and compare it against other baselines too? This might bolster your case further.\\n4. 
Could you provide some insights on why your method is so much better than SparseGPT, given that the latter updates the weights of the entire model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper addresses the challenge of efficient fine-tuning for sparse large language models (LLMs) by introducing a novel theoretical framework. The authors identify and analyze the issue of Sparse Weight Gradient Interference (SWGI), which complicates the optimization of model weights during the fine-tuning process. This work aims to fill a critical gap in existing research and proposes a new iterative fine-tuning scheme to enhance model performance without substantial resource requirements.\\n\\nThe reviews provided a generally negative assessment, highlighting weaknesses in the paper's clarity, empirical validation, and lack of comprehensive comparisons with existing methods. Reviewers expressed differing views on the theoretical soundness and contributions, but all agreed that the empirical results were unsatisfactory. Notably, the reviewers pointed to the absence of an ablation study, insufficient experiments across diverse models, and unclear presentation of results.\\n\\nSince the authors abandoned their rebuttal opportunities, many issues were left unresolved, leading all reviewers to conclude that the paper fails to meet the acceptance criteria. Therefore, I recommend a rejection of this submission.\", \"additional_comments_on_reviewer_discussion\": \"Since the authors abandoned their rebuttal opportunities, many issues were left unresolved, leading all reviewers to conclude that the paper fails to meet the acceptance criteria. Therefore, I recommend a rejection of this submission.\"}", "{\"summary\": \"This paper introduces a comprehensive theoretical framework for memory-efficient fine-tuning for sparse LLMs. 
The authors identify and analyze a key challenge called \\\"Sparse Weight Gradient Interference,\\\" where masked pre-trained weights and PEFT weights exhibit competing optimization objectives during fine-tuning. To address this, they propose a novel method combining three key innovations: a pooling-based PEFT method, normalization of PEFT modules, and an adaptive layer-wise approach using Centered Kernel Alignment for sparsity allocation. Their theoretical analysis identifies three crucial factors affecting fine-tuning efficacy: errors from weight norms, PEFT structures, and error accumulation during fine-tuning. The effectiveness of their approach is demonstrated through extensive experiments on LLaMA-2 models, showing superior performance compared to existing methods, particularly in maintaining model performance under high sparsity conditions (up to 70%), while providing theoretical guarantees for error bounds.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. This paper has a strong theoretical foundation, with mathematical proofs about the error bounds of using a dense adapter for sparse LLMs. It emphasizes the gradient interference and mitigates this error for a better PEFT method for sparse models.\\n\\n2. Novel identification and analysis of the gradient interference problem. This also results in a novel PEFT method.\\n\\n3. Experimental results show its advantage in comparison to LoRA.\\n\\n4. This paper is well-written.\", \"weaknesses\": \"1. Lack of an empirical upper bound: The paper lacks a crucial empirical upper bound comparison. Since the core problem stems from using dense adapter weights with a sparse model, an ideal oracle baseline would be directly fine-tuning only the sparse positions in the model. Including this oracle baseline would provide a clearer understanding of the maximum achievable performance without gradient interference. 
This comparison would also serve as a valuable reference point for future research in sparse fine-tuning methods.\\n\\n2. Limited model diversity in experiments: This work only evaluates LLaMA-2 family models. I would recommend that the authors further evaluate the LLaMA-3 and Mistral families for a more comprehensive comparison.\\n\\n3. Lack of practical efficiency results: While the proposed method shows better performance, the authors do not provide any efficiency results to show the practical training speed of the proposed method. The proposed method cannot be useful if it is slow, even with a strong theoretical foundation. Therefore, I think the authors should provide the number of trainable parameters, the training time, and the training memory for LoRA and the proposed method.\\n\\n4. Typo in Table 2: I don't see any other \"LoSA\" in this paper. The authors should either delete it or add a definition for it.\\n\\n5. Lack of ablation studies: This paper does not do an ablation study for the Iterative Sparse Fine-Tuning Scheme. This should be added to verify its effectiveness.\\n\\nI will consider raising my score if more evidence is provided about the effectiveness of the proposed method.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
52x04chyQs
On the Completeness of Invariant Geometric Deep Learning Models
[ "Zian Li", "Xiyuan Wang", "Shijia Kang", "Muhan Zhang" ]
Invariant models, one important class of geometric deep learning models, are capable of generating meaningful geometric representations by leveraging informative geometric features in point clouds. These models are characterized by their simplicity, good experimental results and computational efficiency. However, their theoretical expressive power still remains unclear, restricting a deeper understanding of the potential of such models. In this work, we concentrate on characterizing the theoretical expressiveness of a wide range of invariant models under *fully-connected* conditions. We first rigorously characterize the expressiveness of the most classic invariant model, message-passing neural networks incorporating distance (DisGNN), restricting its unidentifiable cases to be only highly symmetric point clouds. We then prove that GeoNGNN, the geometric counterpart of one of the simplest subgraph graph neural networks, can effectively break these corner cases' symmetry and thus achieve E(3)-completeness. By leveraging GeoNGNN as a theoretical tool, we further prove that: 1) most subgraph GNNs developed in traditional graph learning can be seamlessly extended to geometric scenarios with E(3)-completeness; 2) DimeNet, GemNet and SphereNet, three well-established invariant models, are also all capable of achieving E(3)-completeness. Our theoretical results fill the gap in the expressive power of invariant models, contributing to a rigorous and comprehensive understanding of their capabilities.
[ "geometric deep learning", "invariant models", "completeness", "expressiveness", "graph neural network", "subgraph graph neural network" ]
Accept (Poster)
https://openreview.net/pdf?id=52x04chyQs
https://openreview.net/forum?id=52x04chyQs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zT7yFJkNDv", "ynsXmEA7nZ", "yXmWRRjJki", "tivcui3DVB", "r2BE7jsimD", "kdKr36Zer1", "cJjzb6ESFA", "YAosNnDVs2", "V4VidEFjUD", "Qdacyb3YuJ", "JT111Psumj", "Hn2byfGq4m", "Gu2qsBiNpb", "GXqeAz8exV", "FaDIoBAsNn", "BgfybNc660", "90Vi1eHtc1", "8pdrh5i12Z", "8Go5CwUlwF", "4gLpWNWAk8", "1qUrLpC7cz", "0JTfn42hHw" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732478646247, 1732540024280, 1737523856027, 1730550544737, 1732209709096, 1732457606204, 1732952427284, 1732207749125, 1734716274501, 1732460722647, 1730523059991, 1732209744393, 1732539471576, 1732208693848, 1732658512216, 1732209040861, 1732490455475, 1730702485214, 1732209766793, 1730699594894, 1732729385067, 1732208299840 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_PLLb" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_y3SR" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_JK57" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Area_Chair_22pe" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_PLLb" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_jQxs" ], [ 
"ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_jQxs" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_jQxs" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Reviewer_JK57" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ], [ "ICLR.cc/2025/Conference/Submission7688/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply\", \"comment\": \"I thank the authors for providing detailed responses to my questions and provide important insights. However, I think the fundamental weaknesses are not addressed.\\n\\nI rank this paper as a borderline paper and lean to reject it in its current form. Also, I do not feel it is unacceptable if this paper is accepted.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to review our paper! Your comments have been invaluable in guiding us to improve our work, and we truly thank you for your effort and engagement.\\n\\nWe deeply appreciate your insights and acknowledge that some concerns may still require further discussion. We sincerely respect your perspective and **remain open to constructive dialogue to address any unresolved issues**. We are looking forward to the opportunity for continued discussion!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The authors of the paper prove that certain families of models are not only\\ninvariant with respect to the Euclidian group and permutation group, but also\\nthat classes of models distinguish the orbits of $\\\\mathbb{R}^3$ under the\\naction of $E(3)$. An extended analysis of the expressivity of DisGNN is\\nprovided and it is shown that this network architecture is nearly $E(3)$\\ncomplete. As a last contribution an analysis is provided for various families\\nof neural networks and conditions are provided under which they are\\n$E(3)$-complete. 
The theoretical results are verified by experiments\non the QM9 dataset and a synthetic dataset with designed edge cases.", "soundness": "3", "presentation": "2", "contribution": "3", "strengths": "The authors have provided both extensive proofs and extensive analysis\nto support their claims. Overall the presentation and intent are clear, and definitions\nare well-thought out, and the authors provide a good heuristic insight with each\nintroduced theorem and definition, which is nice. The extensive analysis of both\nDisGNN and GeoNGNN shows that the work looks to be of\ngood quality to the reviewer. All theorems come with extensive proofs and with\nan intuition which is helpful for the non-mathematical audience. The quality of\nthe content, such as originality and potential impact, is harder to assess since\nthe reviewer is not familiar with expressivity research.", "weaknesses": "To the reader it seems that some of the definitions are somewhat convoluted, and\nsome simplification and clarity in the definitions might improve reading. Some\nof the definitions, while they might be customary in the machine learning\nliterature, are somewhat unfortunately chosen from a mathematical perspective.\nCompleteness of a space in the mathematical sense implies that each Cauchy\nsequence has a limit within that space. A second example is the use of the term\nisomorphism. While not wrong, a better phrasing is to say that the two point\nclouds lie in the same orbit with respect to the action of the Euclidean group\nacting on the tensor product of copies of $\\\mathbb{R}^3$. The current phrasing\nmight be better if it is more in line with terminology used in machine learning.", "questions": ["How does this method for expressivity generalize to different types of architectures? 
To the reviewer it seems that this method for showing expressivity is very specific and would be difficult to generalize to other types of equivariant architectures acting on point clouds."], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "2", "code_of_conduct": "Yes"}", "{"title": "Author Response [1/3]", "comment": "Thank you for taking the time to provide a very detailed and constructive review of our submission! We sincerely appreciate your thoughtful feedback and valuable suggestions, which have helped us identify areas for improvement and refinement in our work.\n\nBelow, we address your comments and questions individually, incorporating clarifications and additional insights where relevant.\n\n---\n\n> Q1: The excessive use of bold text and the absence of a clear outline make the paper\u2019s contributions difficult to discern. Could the authors consider restructuring the introduction for better clarity?\n\n**Response:**\n\nThank you for your valuable feedback! In the revised version, we have reduced the excessive use of bold text to enhance readability. We are committed to further restructuring the introduction to provide a clearer outline of the paper\u2019s contributions in the future.\n\n> Q2: I find that the statements in the section **Theoretical Characterization vs Practical Use** rely on the example C.2. The effect of increasing the sparsity beyond this simple example is understudied despite the authors\u2019 strong claims that relaxing the fully-connected condition leads to better expressiveness of GeoNGNN compared to DisGNN.\n\n**Response:**\n\nThank you for pointing out the reliance on Example C.2 in the discussion of sparsity and expressiveness! Here are further clarifications:\n\n1. 
**Broader Examples Supporting Sparse GeoNGNN\u2019s Expressiveness**\n\nBeyond Example C.2, one can easily check that sparse GeoNGNN can distinguish all symmetric point clouds in [1]\u2019s Appendix A that DisGNN cannot. We give some more illustrations:\n\n- **6-node pair in Appendix A.1 of [1]:** Sparse GeoNGNN embeds isosceles triangles with different base lengths into node features, distinguishing the two graphs, even when subgraphs cover only the two nearest neighbors.\n- **The second 10-node pair in Appendix A.2 of [1]:** Sparse GeoNGNN captures regular pentagons in one graph and partial rings in another, again with a two-nearest-neighbor subgraph radius.\n\n2. **General Rule of Sparse GeoNGNN\u2019s Advancements**\n\nThese examples illustrate a key property of GeoNGNN in sparse settings: it can capture local *subgraph* patterns (e.g., triangle structures in Example C.2) in its inner GNN and embed these into node features for the outer GNN. In contrast, DisGNN only captures *subtree* structures, which leads to confusion in symmetric cases (e.g., distinguishing triangles from rings in Example C.2). This aligns with broader findings in graph learning, showing the superiority of *subgraph* patterns over *tree* patterns.\n\n> Q3: There is no supporting evidence for GeoNGNN over existing architectures in the primary paper. Additionally, there is no comparative analysis involving node feature information generated by a complete invariant function. Could the authors address this gap?\n\n**Response:**\n\nWe greatly appreciate your observation about the need for comparative evidence. We provide further clarifications:\n\n1. **Extensive Experimental Evidence for GeoNGNN in Appendix D**\n\nThe experiments in the main paper are primarily designed to support our theoretical conclusions. However, to provide a broader perspective and deepen the experimental analysis, we have included real-world evaluations in Appendix D. 
In these evaluations, GeoNGNN demonstrates competitive performance, surpassing models such as DimeNet, GemNet, and PaiNN (on MD17), as well as MACE and Equiformer (on MD22), and ComENet and SphereNet (on QM9) across multiple targets and on average.\n\n2. **GeoNGNN as a Promising Model**\n\nThough not achieving SOTA universally, GeoNGNN demonstrates potential as a simple nested extension of DisGNN. We *deliberately keep its design minimal* to focus on theoretical insights, but it can be further tuned for enhanced performance (see lines 1017\u20131055).\n\n3. **Node Features via Complete Invariant Functions**\n\nThe paper emphasizes global expressiveness. However, as shown in our proof (lines 1816\u20131817), complete methods generate *powerful node features* capable of reconstructing entire geometries on their own, thus ensuring global-level completeness. Node-level *geometric* expressiveness is an intricate topic which currently lacks a commonly adopted formalization like E(3)-completeness in the relevant field; we thus defer it to future work."}", "{"title": "Response to Authors", "comment": "I thank the authors for their detailed response and clarifications. I have raised my score accordingly."}", "{"title": "Looking forward to Your Feedback as the Discussion Deadline Nears", "comment": "Dear Reviewer y3SR,\n\nWe greatly appreciate your thoughtful review of our paper and your recognition of its theoretical contributions. Your feedback regarding the presentation weaknesses has been invaluable in helping us refine our work.\n\nIn response to your comments, we have addressed your concerns by clarifying the terminology choices and elaborating on the extension of our framework to equivariant models. Additionally, during the rebuttal process, we made **revisions to enhance the paper\u2019s clarity and rigor**, which we hope could **further address your concerns regarding the \u201cpresentation.\u201d** These revisions include:\n\n1. 
More precise statements regarding the fully-connected conditions in both the abstract and introduction.\n\n2. Expanded introduction of GeoNGNN in the main text to improve accessibility and understanding.\n\n3. Details about the original NGNN and counterexamples in the Appendix to ensure self-containment and support for readers.\n\nWe sincerely hope that our responses and updates provide deeper insights into our work and effectively resolve the issues you highlighted. We look forward to further discussions and would be grateful if you might consider **raising your score if you find our improvements satisfactory**."}", "{"title": "Author Response [1/2]", "comment": "We sincerely thank you for taking the time to provide detailed and constructive feedback on our submission. Your comments are highly valuable and have significantly helped us identify areas for improvement in our work. Below, we address your suggestions and concerns.\n\n> I recommend the authors integrate key aspects of NGNN ... into the main text. Additionally, including a comparison with the original DisGNN would be helpful\u2014highlighting the differences and explaining what enables NGNN (intuitively) to overcome the limitations of DisGNN.\n> \n\n**Response:**\n\nThank you for this insightful suggestion! We appreciate your recommendation to include key aspects of NGNN in the main text for enhanced clarity. \n\n1. **Further elaboration of NGNN**\n\nWhile we recognize the value of integrating NGNN\u2019s core equations or an architectural diagram directly into the main text, the constraints imposed by the extensive technical results and page limits make this challenging. \n\nTo address this, we have expanded the relevant discussion in the Appendix, highlighting these additions in **green** in the revised version. We hope this approach strikes a balance between clarity and adherence to formatting constraints.\n\n2. 
**Intuitive explanation of GeoNGNN\u2019s advancements over DisGNN**\n\nIn the main text, we have provided an intuitive explanation for how GeoNGNN resolves previously unidentifiable cases in DisGNN. Specifically, GeoNGNN leverages symmetry-breaking through node marking (see lines 272\u2013278 and 302\u2013308). Since all unidentifiable cases are symmetric (Theorem 4.2), this symmetry-breaking approach suffices to address all these corner cases effectively.\n\nThat said, we understand that the dense technical details may still pose challenges to accessibility. We sincerely value your feedback and are committed to further refining our explanations to enhance the paper\u2019s clarity and make it more approachable for a broader audience.\n\n> Several studies [1], [2], [3], [4] have explored scenarios where the graph is not fully connected, underscoring the need to evaluate the performance of invariant neural networks in sparse graph settings.\n> \n\n**Response:**\n\nThank you for raising this important point! We acknowledge that our work primarily focuses on fully connected graphs and that this limitation is discussed in Sections 5.4 and 7 of the paper. Regarding the references you provided, we discuss them as follows:\n\n1. **ComENet [1] also establishes theoretical conclusions assuming strongly connected 3D graphs.**\n \n We respectfully note that while ComENet [1] establishes important theoretical results, these conclusions are still derived under the assumption of strongly connected graphs, as outlined in Section 3.1 of [1]. Additionally, it does not address cases involving multiple nearest neighbors.\n \n2. ***Invariant* methods we study cannot trivially extend to sparse settings compared to previous *equivariant* methods [2, 3, 4].**\n \n We illustrate this through an example: consider a point cloud with two clusters connected by a single edge. 
Such sparsity is common, where the graph has only local connectivity but remains overall connected. In these cases, invariant methods struggle to detect the relative orientations of the two clusters (e.g., as varied by swinging the edge connecting the two clusters), highlighting a fundamental limitation of invariant models in sparse graphs. We extensively discuss this challenge in lines 406\u2013408, alongside relevant works.\n \n We appreciate your suggestion to consider sparse graph settings in future work and will strive to address this in subsequent research endeavors.\n \n\n> The authors should aim to demonstrate the significance of their approach by clarifying in which specific cases their method outperforms existing methods. Providing examples or scenarios where GeoNGNN has a clear advantage would strengthen the empirical contributions.\n> \n\n**Response:**\n\nThank you for this excellent suggestion! We appreciate the opportunity to clarify the empirical contributions of GeoNGNN and would like to direct your attention to Appendix D, where we provide extensive experimental evidence. Below, we highlight some key points:\n\n1. **GeoNGNN demonstrates strong performance, surpassing advanced geometric models:**\n\nDespite its straightforward design, GeoNGNN achieves very promising results. For instance, it surpasses DimeNet, GemNet, and PaiNN on MD17; it outperforms MACE and Equiformer on MD22; and it exceeds ComENet and SphereNet on QM9. These results underscore GeoNGNN\u2019s capability to deliver competitive performance across diverse benchmarks.
It establishes that DisGNN (message-passing neural networks incorporating distance) is limited only by highly symmetric point clouds and demonstrates that GeoNGNN (the geometric counterpart of one of the simplest subgraph graph neural networks) achieves E(3)-completeness, resolving these cases. The work further shows that other models like DimeNet, GemNet, and SphereNet can also achieve E(3)-completeness. These results enhance the understanding of invariant models' capabilities, bridging the gap between theoretical expressiveness and practical applications.\n\nOne of the main criticisms, shared by reviewers, was targeted toward the clarity of exposition of this work. The authors provided revisions of their text, which were in general approved and acknowledged by the reviewers. The scores are consistently given as borderline accept. After reading the paper, I believe that the contributions are above the threshold for a publication at ICLR, and I recommend an accept. Nevertheless, I strongly encourage the authors to thoroughly take into account the reviewers' remarks for the final version of their paper.", "additional_comments_on_reviewer_discussion": "Two reviewers raised their scores to 6 after the revision provided by the authors."}", "{"title": "Author Response", "comment": "We are deeply grateful for your thoughtful engagement and are delighted that our responses addressed your concerns. Your updated score and support mean a great deal to us.\n\nPlease don\u2019t hesitate to reach out if you have any further questions or if there are any remaining concerns\u2014we would be glad to continue the discussion."}", "{"summary": "This paper studies the expressive power of message-passing neural networks incorporating pairwise distance between graph nodes, showing their near E(3)-completeness. Furthermore, the authors study the subgraph graph neural networks, which can achieve E(3)-completeness. 
Therefore, it is possible to make DimeNet, GemNet, and SphereNet achieve E(3)-completeness.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": "The paper systematically studies the problem of E(3)-completeness of geometric graph neural networks.", "weaknesses": "The work is based on the global connectivity assumption, and this assumption significantly limits this work. Also, the experimental results seem to be quite weak.", "questions": "I have two questions.\n\n1. Do you have any insight on achieving E(3)-completeness for frame-based approaches?\n\n2. Can you comment on achieving E(3)-completeness by using node features beyond pairwise distances, e.g., dihedral angles?", "flag_for_ethics_review": "['No ethics review needed.']", "details_of_ethics_concerns": "NA", "rating": "6", "confidence": "2", "code_of_conduct": "Yes"}", "{"title": "Author Response [2/3]", "comment": "> Q4: A significant portion of the QM9 dataset consists of non-symmetric structures. What are the proportions of indistinguishable data restricted to the subset of QM9 that includes only symmetric structures?\n\n**Response:**\n\nWe appreciate this question, yet we respectfully believe there may be some misunderstandings. We offer the following clarification:\n\n**1. Focus on symmetric structures, NOT indistinguishable cases**\n\nTheorem 4.2 provides a necessary (but not sufficient) condition for unidentifiable cases, establishing that all unidentifiable cases must be *symmetric*. Consequently, by assessing the proportion of symmetric structures in QM9, we effectively evaluate a superset of the unidentifiable cases. 
This approach supports our claim that DisGNN is near-complete.\n\nEvaluating \u201cthe proportion of indistinguishable data within the symmetric subset of QM9\u201d is impossible due to the lack of a sufficient and necessary condition for indistinguishable cases, and is also not necessary for the claimed theoretical conclusion.\n\n> Q5: In the QM9 noise study, the significant reduction in non-distinguishable point clouds occurs near what appears to be the level of reported error in the QM9 dataset. Given the reported error of 0.1\u00c5, how is this error rescaled based on the applied scaling coefficient?\n\n**Response:**\n\nWe appreciate the reviewer\u2019s insightful observation. However, we clarify that the \u201cDeviation Error ($\\epsilon$)\u201d reported in Figure 2 is **dimensionless** --- as described in lines 927\u2013928, we first rescale all point clouds to fit within a unit sphere. This ensures that the Deviation Error is standardized across all point clouds, independent of their original scale. Consequently, the result is not directly tied to the reported error $0.1 \\, \text{\u00c5}$ of the QM9 dataset itself.\n\n> Q6: Distinguishing structures on QM9, which lacks conformers, does not seem to be as important as datasets which contain conformers or very nearly isomorphic point clouds. The most compelling analysis appears to come from the study of MD17 but with mixed results. GeoNGNN appears to do particularly well on Benzene which is highly symmetric. How does Benzene behave under the noise tolerance study?\n\n**Response:**\n\nThank you for this insightful observation! We provide the following clarifications:\n\n1. 
**Why do we evaluate DisGNN\u2019s distinguishing ability on QM9?**\n\nThe experiments on QM9 are designed to support our conclusion that \u201cDisGNN is near-complete on real-world datasets.\u201d QM9, being an almost exhaustive enumeration of small molecules, serves as a representative dataset in the chemical field, reflecting the distribution of naturally occurring small molecules.\n\nWe also respectfully believe there may be misunderstandings about our experimental purpose regarding \u201cnearly isomorphic point clouds\u201d (as demonstrated by the reviewer in the weakness part), and we respond further in our response to your Question 8.\n\n2. **DisGNN\u2019s Behavior on Benzene Conformations**\n\nIt is interesting to find that DisGNN struggles with structures like Benzene, likely due to its high symmetry, affecting DisGNN\u2019s learning behavior in this local manifold area. To investigate further, we evaluated the symmetry proportions of Benzene conformations in MD17, finding approximately 0.043% ($\\epsilon =0.1$, $r=2$, 267 symmetric structures out of 627983 conformations) with C-symmetry. In contrast, molecules like aspirin contain 0% C-symmetric structures. These results may indicate that complete models like GeoNGNN can perform particularly well on such symmetric structures where DisGNN may falter.\n\n> Q7: Typically, ModelNet40 is sampled to avoid handling large point clouds. It is unclear from the text whether the entire mesh or a sampled version is used. If sampled uniformly, there is no guarantee that the symmetries are preserved. Could the authors clarify this in the text?\n\n**Response:**\n\nWe thank the reviewer for this critical point, which we missed in the paper and will address in detail in the revised version. 
\\n\\nTo clarify, **we use farthest point sampling (FPS) to downsample ModelNet40 from 1024 points to 256 points to mitigate noise.** Farthest point sampling is chosen as it effectively preserves the structural integrity of the objects while reducing noise that could be introduced by uniform sampling. This ensures that the symmetries of the original shapes are maintained.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you so much for your detailed and constructive feedback! We sincerely appreciate the time and effort you have taken to thoroughly engage with our work and provide us with such thoughtful suggestions for improvement.\\n\\nWe have carefully considered your concerns and have made preliminary revisions to the paper accordingly. These revisions are marked in $\\\\textcolor{red}{\\\\textbf{red}}$ for clarity. Specifically, we have:\\n\\n+ **Clarified Theoretical Claims**: Addressed the potentially misleading theoretical claims in the abstract, introduction, and conclusion by explicitly stating the fully connected graph condition. We also emphasized the significance of the problem in the introduction, highlighted the differences from prior work, and directed readers to the relevant sections for further details.\\n\\n+ **Enhanced GeoNGNN Explanation**: Added a brief intuitive description of GeoNGNN and its relationship with DisGNN in the main text. Additionally, we included a much more detailed formalization of the original NGNN in Appendix G.2 for completeness and self-containment.\\n\\nDue to page limitations, some of these revisions are located in the appendix, and we acknowledge that further refinements may still be needed. We remain committed to improving both the **rigor** and **readability** of the paper, particularly for a general audience, by revisiting and rephrasing the statements more broadly where possible. \\n\\nPlease do not hesitate to reach out if the revisions do not fully address your concerns. 
We are always eager to refine and enhance the paper further based on your valuable input. Thank you again for your guidance and support!\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely thank you for taking the time to review our work and for providing thoughtful feedback and constructive suggestions! Below, we address your comments in detail.\\n\\n---\\n\\n> Some of the definitions, while they might be customary in the machine learning literature, are somewhat unfortunately choses from a mathematical perspective. Completeness of a space in the mathematical sense implies that each Cauchy sequence has a limit within that space\\n> \\n\\n**Response:**\\n\\nWe greatly appreciate your expert remarks on the use of terms like *completeness* and *isomorphism!* \\n\\nTo clarify, we closely adhere to conventions established in prior works in this area ([1-3]). In these contexts, *(in)completeness* typically refers to the (in)ability of geometric models to distinguish point clouds, differing from its mathematical sense related to Cauchy sequences. Similarly, the term *isomorphism* aligns with recent literature ([4]), where the aim has been to bridge the expressiveness research in geometric deep learning with that of traditional graph learning, particularly in relation to the graph isomorphism problem.\\n\\nThat said, we greatly value the reviewer\\u2019s perspective on these terms and will carefully consider modifying the terminology if necessary to avoid ambiguity and ensure clarity from both mathematical and machine learning perspectives.\\n\\n> How does this method for expressivity generalize to different types of architectures? To the reviewer it seems that this method for showing is very specific and would be difficult to generalize to other types of equivariant architectures acting on point clouds.\\n> \\n\\n**Response:**\\n\\nThank you for raising this excellent question. 
Our approach to establishing expressiveness is primarily focused on invariant models working with distance graphs, as supported by the reconstruction proofs (see Appendix H.2 and H.3). However, we offer the following thoughts on generalizing this framework:\n\n1. **Generalizing to invariant models using other invariant features such as angles:**\n \n Models such as DimeNet that incorporate higher-order geometric features like angles can still be analyzed within our framework *by transforming these features into distances*. For instance, a function using $(\\theta_{AC}, \\theta_{AB}, d_{AC})$ as input can be equivalently expressed in terms of $(d_{AC}, d_{AB}, d_{BC})$. This equivalence allows us to evaluate whether the provided geometric information is sufficient for completeness. Indeed, we demonstrate this in Appendix H.5, where we prove the completeness of models like DimeNet, SphereNet, and GemNet under our framework.\n \n2. **Generalizing to equivariant models:**\n \n Equivariant models typically learn higher-order geometry using vector or tensor products. These operations often implicitly capture invariant features, for example, via the runtime geometry calculation methods in [5]. For a simpler example, the inner product $\\vec{r}\_{ij} \\cdot \\vec{r}\_{ik}$ captures angular information, such as the angle $\\angle jik$. A promising direction for extending our proof framework to equivariant models would involve demonstrating that these operations can *extract the necessary invariant features required for geometric reconstruction in our proof*, thus satisfying the completeness criteria established in our work.\n \n\n---\n\nWe hope this response provides clarity and addresses your concerns. Once again, we are very grateful for your thoughtful questions and constructive feedback, which have inspired us to further refine and expand upon our work. 
We are looking forward to further discussion, and would greatly appreciate it if you could **raise your score** when you feel your concerns are addressed.\\n\\n[1] Is Distance Matrix Enough for Geometric Deep Learning? NIPS 2023\\n\\n[2] Complete Neural Networks for Complete Euclidean Graphs. AAAI 2023\\n\\n[3] Incompleteness of graph neural networks for points clouds in three dimensions. \\n\\n[4] On the Expressive Power of Geometric Graph Neural Networks.\\n\\n[5] ViSNet: an equivariant geometry-enhanced graph neural\\nnetwork with vector-scalar interactive message passing for\\nmolecules\"}", "{\"title\": \"Response to Authors\", \"comment\": \"The revisions reformulating the theoretical claims have significantly clarified the contributions of this paper, resolving my initial concerns. As a result, I have raised my rating to Borderline Accept.\\n\\nHowever, I believe there is considerable room for improvement in the paper\\u2019s flow, particularly regarding the presentation of GeoNGNN. While the authors direct readers to the appendix for details about the architecture, it is unconventional to relegate the explicit description of a new proposed theoretical framework to the appendix. This approach disrupts the flow of the main text and makes it harder for readers to fully grasp the core ideas without constant back-and-forth references. I strongly recommend integrating a concise yet complete explanation of GeoNGNN within the main body of the paper to enhance its coherence and accessibility.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you very much for your thoughtful comments and questions! 
Below, we address each of your points in detail, and we hope our responses provide clarity and insight.\n\n---\n\n> The work is based on the global connectivity assumption, and this assumption significantly limits this work.\n> \n\n**Response:**\n\nWe acknowledge your concern regarding the Fully-Connected Condition, and we have explicitly and extensively discussed this limitation in Section 5.4. Here, we would like to further emphasize the *reasonableness* of this assumption in the context of our study:\n\n1. **Global connectivity is a standard assumption in expressiveness research for *invariant* models.**\n \n As discussed in Section 5.4, many previous works in the literature adopt this assumption, given the challenges faced by invariant models in preserving local patterns during message passing. Unlike equivariant methods, which can rely on equivariant features to retain local information even in sparse graphs, invariant methods inherently lose such patterns. \n \n\nAdditionally, we respectfully refer you to our response to Reviewer **jQxs** (for the question \u201cSeveral studies [1], [2], [3], [4] have explored\u2026.\u201d), where we further justify the reasonableness of this assumption for invariant methods.\n\n> Also, the experimental results seem to be quite weak.\n> \n\n**Response:**\n\nWe appreciate your concern about the experimental results! The experiments presented in the main text are primarily intended to support the theoretical claims of the paper (e.g., the near-completeness of DisGNN and the completeness of other methods). For additional empirical evidence, we would like to respectfully draw your attention to the **extensive real-world experiments on GeoNGNN provided in Appendix D.**\n\nWe demonstrate that GeoNGNN achieves promising performance despite its simple design. 
For instance, it surpasses advanced methods such as DimeNet, GemNet, and PaiNN on MD17; MACE and Equiformer on MD22; and ComENet and SphereNet on QM9.\\n\\nFurthermore, we outline potential directions for improving GeoNGNN to further enhance its empirical performance. While we acknowledge that there is room for improvement, we view these experiments as a foundational step for future work, given the theoretical emphasis of the current study.\\n\\n> Do you have any insight on achieving E(3)-completeness for frame-based approaches?\\n> \\n\\n**Response:**\\n\\nThank you for this insightful question! Some frame-based methods (e.g., [1], [2]) indeed face challenges with symmetric structures where global-level frames may degenerate.\\n\\nA potential way forward is **breaking the overall symmetry through node marking, as demonstrated by GeoNGNN.** By marking a node, frames can be calculated on the symmetry-broken point cloud (e.g., using PCA while ignoring the marked node). To ensure permutation invariance, the results can be averaged across different marked nodes. We believe this strategy could be a promising direction for extending frame-based approaches toward E(3)-completeness.\\n\\n> Can you comment on achieving E(3)-completeness by using node features beyond pairwise distances, e.g., dihedral angles?\\n> \\n\\n**Response:**\\n\\nWe greatly appreciate the reviewer\\u2019s comment on this topic. Actually, complete models we establish like DimeNet and GemNet provide good examples of incorporating angles and dihedral angles, and their completeness is formally established in **Theorem 5.5.** Below, we briefly highlight some key points.\\n\\n1. **Achieving completeness using *angle* information.**\\n \\n To prove this, we show that models like DimeNet can effectively *implement* GeoNGNN. For example, with angle information $\\\\theta_{aij}$ and distances $(d_{ai}, d_{ij})$, the remaining distance $d_{aj}$ can be calculated. 
This allows us to express DimeNet equivalently using only distance information, aligning it with our proof framework. Details are provided in Appendix H.5.1.\\n \\n2. **Dihedral angles as redundant for completeness.**\\n \\n While angle information alone is sufficient for achieving theoretical completeness when properly integrated like DimeNet, dihedral angles (as used in GemNet) are redundant in fully connected scenarios, as noted in **lines 398\\u2013399.** However, these features may enhance generalization in practical applications, as demonstrated in empirical results.\\n \\n---\\n\\nWe hope these responses could address your concerns and provide clarity on the limitations, insights, and future directions of our work. Thank you again for your thoughtful feedback and the opportunity to further explain our contributions, and we would greatly appreciate it if you could **raise your score** accordingly.\\n\\n[1] FAENet: Frame Averaging Equivariant GNN for Materials Modeling\\n\\n[2] Frame Averaging for Invariant and Equivariant Network Design\"}", "{\"title\": \"Further Questions\", \"comment\": \"I appreciate the authors' effort in addressing some of my concerns. However, there are still key areas where further clarification is necessary to ensure a clear and coherent understanding of the paper\\u2019s contributions and framework.\\n\\n**Primary Contribution and Scope:**\\nMy main concern relates to reframing the primary contribution as \\u201cthe proposal of GeoNGNN and the proof that this approach effectively resolves the limitation of DisGNN in identifying symmetric point clouds when the graphs are fully connected.\\u201d \\nNotice that while the results in this work successfully demonstrate the completeness of SphereNet in fully connected graph settings, the paper also implicitly acknowledges that this does not extend to scenarios involving non-fully connected graphs. 
It is well-known that distinguishing cis- and trans-isomers in sparse graphs often requires additional information, such as torsion angles, to glue local features into a global understanding\\u2014something not addressed under the current setup.\\n\\nIn my opinion, the claim in the introduction is overly broad. The assertion that \\u201cDimeNet, SphereNet, and GemNet are Complete\\u201d is valid only under the assumption of fully connected graphs. While it is acceptable for the theoretical results to be restricted to fully connected graphs, the introduction and stated contributions should be revised to explicitly reflect this limitation. As it stands, the way the theoretical results are framed in the introduction risks being misleading or overly generalized, which could detract from the paper\\u2019s otherwise significant contributions. I recommend that the authors rephrase their claims and clarify the scope of their contributions to prevent potential misunderstandings.\\n\\n**Architecture of GeoNGNN:**\\nMy concern regarding including GeoNGNN's architecture in the main text stems from its central role as the backbone framework that addresses DisGNN's limitations. Without a clear presentation of this architecture in the main text, I worry that the paper\\u2019s flow and ability to convey its key contributions effectively may suffer, especially for the general audience. Providing an accessible and concise explanation of GeoNGNN in the main text would enhance the paper\\u2019s clarity and impact.\\n\\n**Empirical Evidence:** \\nWhile the empirical evidence provided is not sufficiently robust, I acknowledge the novelty of this paper and its proposed frameworks. I understand that the paper's focus is on the ability to distinguish non-isomorphic point clouds rather than on developing architectures that achieve state-of-the-art results. 
I would like to reconsider the marginal improvement on several benchmarks as not necessarily a weakness but focus on rephrasing the conveying of theoretical contributions.\\n\\nOverall, I believe this is a solid paper with valuable contributions. However, I strongly recommend that the authors address my concerns, especially the one regarding Primary Contribution and Scope. The introduction should explicitly state the assumption of fully connected graphs and emphasize that even under this condition, the question being studied remains unresolved and presents significant challenges for DisGNN. This would underscore the importance of the authors\\u2019 work and align the framing with its true contributions. If these concerns are adequately addressed, I would likely raise my score accordingly.\"}", "{\"summary\": \"This paper explores the geometric completeness of a significant class of geometric deep learning models: invariant neural networks. These networks leverage invariant features to impose strong inductive biases on spatial information, yet their theoretical expressive power remains somewhat unclear. This study aims to bridge that gap, enhancing both our theoretical understanding and practical application of these models.\\n\\nThe authors first demonstrate that incorporating distance into message-passing neural networks (like DisGNN) allows for the identification of asymmetric point clouds but struggles with highly symmetric ones. 
They then investigate geometric extensions of subgraph-based GNNs and prove that these models, specifically GeoNGNN, can successfully distinguish symmetric point clouds, achieving E(3)-completeness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": [\"The paper attempts to address a crucial problem that enhances our understanding of the potential of invariant neural networks and can guide future model design.\", \"Investigating the geometric counterparts of subgraph GNNs is a novel contribution.\", \"The results extend beyond specific cases, such as asymmetric point clouds, broadening our understanding of how these models perform on symmetric point clouds as well.\"], \"weaknesses\": \"- The paper lacks clarity and structure in some areas. The detailed explanation of NGNN, which serves as the backbone of their main contribution, the GeoNGNN framework, is left in the appendix. I recommend the authors integrate key aspects of NGNN, such as its core equations or an architectural diagram, into the main text. Additionally, including a comparison with the original DisGNN would be helpful\\u2014highlighting the differences and explaining what enables NGNN (intuitively) to overcome the limitations of DisGNN. This would give readers a clearer understanding of how the proposed approach builds on previous work and addresses specific challenges without disrupting the flow. Instead, a brief overview of NGNN, along with key formulas and a comparison with GNN, could benefit the flow of the whole paper.\\n\\n- The results are primarily constrained to cases with global connectivity, which is often impractical in real-world applications due to the significant computational costs. Several studies [1], [2], [3], [4] have explored scenarios where the graph is not fully connected, underscoring the need to evaluate the performance of invariant neural networks in sparse graph settings. 
In practice, invariant neural networks tend to perform worse than equivariant ones in these cases. While the authors have left these cases in the future direction, it would greatly strengthen the paper if they could extend their analysis to sparse graphs or at least discuss how their completeness results may vary with different levels of graph sparsity. Providing theoretical bounds on performance degradation as connectivity decreases would also be valuable.\\n\\n- The experimental results, while showing some improvement, are relatively marginal, which limits the empirical impact of the work. I suspect this might be due to the sparsity of the graphs used in practical applications. The authors should aim to demonstrate the significance of their approach by clarifying in which specific cases their method outperforms existing methods. Providing examples or scenarios where GeoNGNN has a clear advantage would strengthen the empirical contributions.\\n\\n\\n[1] Wang, L., Liu, Y., Lin, Y., Liu, H., & Ji, S. (2022). ComENet: Towards complete and efficient message passing for 3D molecular graphs. Advances in Neural Information Processing Systems, 35, 650-664.\\n[2] Joshi, C. K., Bodnar, C., Mathis, S. V., Cohen, T., & Lio, P. (2023, July). On the expressive power of geometric graph neural networks. In International Conference on Machine Learning (pp. 15330-15355). PMLR.\\n[3] Wang, S. H., Hsu, Y. C., Baker, J., Bertozzi, A. L., Xin, J., & Wang, B. (2024). Rethinking the benefits of steerable features in 3D equivariant graph neural networks. In The Twelfth International Conference on Learning Representations, ICLR\\n[4] Sverdlov, Y., & Dym, N. (2024). On the Expressive Power of Sparse Geometric MPNNs.\", \"questions\": \"Please address the concerns I raised in the weaknesses section. Additionally, I recommend revising the introduction to better reflect the paper's contributions. 
In my view, the primary contribution is the proposal of the geometric counterpart of NGNN and the proof that this approach effectively resolves the limitation of DisGNN in identifying **symmetric point clouds when the graphs are even fully connected**.\\nThe authors should also consider relevant experiments in this direction to emphasize the novelty and significance of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response [3/3]\", \"comment\": \"> Q8: The selection of ModelNet40 does not seem to rigorously test the theoretical claims of the paper, which focus on nearly isomorphic point clouds. Could the authors provide more rigorous testing on datasets that better align with their theoretical focus?\\n\\n**Response:**\\n\\nWe thank the reviewer for their insightful observation and for raising this important point. Below, we clarify how our evaluation aligns with the theoretical focus of the paper:\\n\\n1. **Theoretical claims of DisGNN focus on \\u201cNear E(3)-completeness.\\u201d**\\n\\n Datasets like QM9 and ModelNet40 are selected as real-world examples sampled from natural distributions, providing practical demonstrations of our theoretical claims. The relevance of these datasets has been clarified in the discussion, particularly in **\\u201cWhy we evaluate DisGNN\\u2026on QM9.\\u201d**\\n\\n2. **Corner cases theoretically unidentifiable by DisGNN are not \\u201cNearly isomorphic point clouds.\\u201d**\\n\\n While \\u201cNearly isomorphic point clouds\\u201d can result in similar graph embeddings (e.g., two molecular conformations differing by a small disturbance in one atom\\u2019s position), in theoretical view, they are still distinguishable by DisGNN since precision is assumed large enough. 
Theoretical works such as [1-3] (including ours), as well as a broader range of studies in traditional graph learning, focus on inherently indistinguishable point cloud (or graph) pairs\\u2014those that result in *identical* embeddings even under models with *infinite* precision. These cases represent the intrinsic limitations of the model, rather than practical challenges related to noise or perturbations.\\n\\n Consequently, **experiments that align with our theoretical focus should evaluate the distinguishability of models on such challenging pairs**. We address this explicitly in Section 6.2, where our experiments demonstrate the model\\u2019s limitations in such cases while showcasing the capabilities of complete models in handling them.\\n\\n> Q9: It is unclear from the text and appendix what each structure in the synthetic dataset represents, how these structures were constructed, and why they are significant. Could the authors provide more detailed explanations on the construction and relevance of these synthetic structures?\\n\\n**Response:**\\n\\nWe thank the reviewer for their valuable suggestion. We have now included detailed explanations about these synthetic structures, including their construction and significance, in the appendix, highlighted with **green** text.\\n\\n---\\n\\nThank you once again for your detailed and thoughtful feedback. We recognize that some of the questions raised may have arisen from misunderstandings due to shortcomings in our initial presentation. We have clarified them in our responses and are continuing to improve them in the next revision.\\n\\nWe sincerely hope that our responses resolve your concerns and offer deeper insights into our work. We are looking forward to further discussion with you. 
If there are no further technical concerns, we would be delighted if you could consider **raising your score** accordingly.\\n\\n[1] Is Distance Matrix Enough for Geometric Deep Learning?\\n\\n[2] Complete Neural Networks for Complete Euclidean Graphs. \\n\\n[3] Incompleteness of graph neural networks for points clouds in three dimensions\"}", "{\"summary\": \"The paper offers the following contributions:\\n1) Introduces and defines the notion of \\\"Identify\\\" for invariant GNNs, positioned between distinguishability and completeness.\\n2) Provides a characterization for the incompleteness of DisGNN\\n3) Proposes GeoNGNN to ensure identification of the cases where DisGNN is incomplete\\n4) Demonstrates that several established invariant GNNs are capable of completeness\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The paper introduces a novel conceptual framework for understanding the efficacy of certain invariant architectures. This is further supported through theoretical analysis and empirical studies. Additionally, it proposes a framework for the development of future architectures extending the impact and significance of the work.\", \"weaknesses\": \"The paper lacks sufficient empirical evidence to support its theoretical analysis, significantly reducing the overall significance and impact of the work. The selected real world experiments emphasize datasets which lack conformers or nearly isomorphic point clouds. Furthermore, the main text does not provide adequate evidence to demonstrate the advantages of GeoNGNN over the existing complete invariant architectures.\\n\\nAdditionally, the excessive use of bold text and the absence of a clear outline in the introduction make it challenging to follow and clearly understand the contributions of the paper.\", \"questions\": \"1) The excessive use of bold text and the absence of a clear outline make the paper\\u2019s contributions difficult to discern. 
Could the authors consider restructuring the introduction for better clarity?\\n\\n2) I find that the statements in the section **Theoretical Characterization vs Practical Use** rely on the example C.2. How increasing the sparsity beyond this simple example is understudied despite the authors strong claims that relaxing the fully-connected condition leads to better expressiveness of GeoNGNN compared to DisGNN.\\n\\n3) There is no supporting evidence for GeoNGNN over existing architectures in the primary paper. Additionally, there is no comparative analysis involving node feature information generated by a complete invariant function. Could the authors address this gap?\\n\\n4) A significant portion of the QM9 dataset consists of non-symmetric structures. What are the proportions of indistinguishable data restricted to the subset of QM9 that includes only symmetric structures?\\n\\n5) In the QM9 noise study, the significant reduction in non-distinguishable point clouds occurs near what appears to be the level of reported error in the QM9 dataset. Given the reported error of 0.1\\u00c5, how is this error rescaled based on the applied scaling coefficient? \\n\\n6) Distinguishing structures on QM9, which lacks conformers, does not seem to be as important as datasets which contain conformers or very nearly isomorphic point clouds.The most compelling analysis appears to come from the study of MD17 but with mixed results. GeoNGNN appears to do particularly well on Benzene which is highly symmetric. How does Benzene behave under the noise tolerance study?\\n\\n7) Typically, ModelNet40 is sampled to avoid handling large point clouds. It is unclear from the text whether the entire mesh or a sampled version is used. If sampled uniformly, there is no guarantee that the symmetries are preserved. 
Could the authors clarify this in the text?\\n\\n8) The selection of ModelNet40 does not seem to rigorously test the theoretical claims of the paper, which focus on nearly isomorphic point clouds. Could the authors provide more rigorous testing on datasets that better align with their theoretical focus?\\n\\n9) It is unclear from the text and appendix what each structure in the synthetic dataset represents, how these structures were constructed, and why they are significant. Could the authors provide more detailed explanations on the construction and relevance of these synthetic structures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Again, we deeply appreciate your continued engagement and valuable feedback on improving our paper! We deeply acknowledge your efforts, and we remain committed to refining and updating the manuscript.\\n\\nWe recognize that the presentation of GeoNGNN may have caused some confusion, particularly for broader audiences who are less familiar with subgraph GNNs. To address this, we have prepared a preliminary revision that **provides both an intuitive and formal explanation of GeoNGNN in the main text**. Additionally, we have **highlighted its key advantages over DisGNN**, as discussed with you (e.g., subgraph versus subtree). These modifications are highlighted with **$\\\\textcolor{orange}{yellow}$ in Section 5.1**. We believe these updates will help align readers\\u2019 understanding with our intended message.\\n\\nWe hope this revision addresses your concerns to some extent. We are eager to engage in further discussions with you to continue improving the paper. Once again, thank you so much for your valuable guidance and support!\"}", "{\"title\": \"Author Response [2/2]\", \"comment\": \"2. 
**GeoNGNN is a promising foundation despite not being fully optimized for SOTA:**\\n\\nWhile GeoNGNN does not consistently achieve SOTA results, we believe it represents a highly promising approach. Its current design is *deliberately simple*, serving as a nested version of the basic invariant model, DisGNN, for our theoretical purpose. As discussed in lines 1017\\u20131055, its architecture has significant potential for refinement and extension to enhance performance further. However, given the theoretical focus of this work, we prioritized simplicity to underscore the model\\u2019s completeness rather than optimizing it for empirical SOTA performance.\\n\\n> In my view, the primary contribution is the proposal of the geometric counterpart of NGNN and the proof that this approach effectively resolves the limitation of DisGNN in identifying\\u00a0**symmetric point clouds when the graphs are even fully connected**. The authors should also consider relevant experiments in this direction to emphasize the novelty and significance of this work.\\n> \\n\\n**Response:**\\n\\nWe are grateful for your thoughtful observation regarding the primary contribution of our work. \\n\\n1. **Main focus of our work**\\n\\nWe would like to clarify that the central focus of our study is to rigorously **establish the expressiveness of a broad class of models**, including DisGNN and complete models, thereby advancing the theoretical understanding of their capabilities.\\n\\nWhile GeoNGNN is introduced as a pivotal proof-of-concept, it is not the exclusive focus of our work. Instead, as the simplest complete model, GeoNGNN provides a foundational framework for demonstrating the completeness of other models. Our experiments are designed to align with this theoretical emphasis rather than highlighting the novelty of GeoNGNN alone.\\n\\n2. 
**Existing Evidence Supporting the Effectiveness of Complete Models**\\n\\nIn Section 6.2, we assess all established complete models on tasks specifically designed to distinguish pairs of point clouds that DisGNN cannot differentiate. The results consistently demonstrate that complete models effectively address these limitations, providing strong evidence of their ability to resolve symmetric structures.\\n\\n---\\n\\nIn conclusion, we are deeply thankful for your thoughtful and constructive feedback. Your insights have greatly helped us improve the presentation and focus of our work. We hope our clarifications address your concerns, and we would greatly appreciate it if you could **raise your score** accordingly.\"}
52XG8eexal
State-space models can learn in-context by gradient descent
[ "Neeraj Mohan Sushma", "Yudou Tian", "Harshvardhan Mestha", "Nicolò Colombo", "David Kappel", "Anand Subramoney" ]
Deep state-space models (Deep SSMs) have shown capabilities for in-context learning on autoregressive tasks, similar to transformers. However, the architectural requirements and mechanisms enabling this in recurrent networks remain unclear. This study demonstrates that state-space model architectures can perform gradient-based learning and use it for in-context learning. We prove that a single structured state-space model layer, augmented with local self-attention, can reproduce the outputs of an implicit linear model with least squares loss after one step of gradient descent. Our key insight is that the diagonal linear recurrent layer can act as a gradient accumulator, which can be `applied' to the parameters of the implicit regression model. We validate our construction by training randomly initialized augmented SSMs on simple linear regression tasks. The empirically optimized parameters match the theoretical ones, obtained analytically from the implicit model construction. Extensions to multi-step linear and non-linear regression yield consistent results. The constructed SSM encompasses features of modern deep state-space models, with the potential for scalable training and effectiveness even in general tasks. The theoretical construction elucidates the role of local self-attention and multiplicative interactions in recurrent architectures as the key ingredients for enabling the expressive power typical of foundation models.
[ "state-space models", "in-context learning", "linear recurrent networks", "mesa-learning" ]
Reject
https://openreview.net/pdf?id=52XG8eexal
https://openreview.net/forum?id=52XG8eexal
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tCdA4pzAxy", "sbOdtDO2zh", "rf3fErUIsL", "rJdJx9V7fb", "meg4z50DJb", "i6hfxLvq13", "hpN61KmXjz", "bUsFIb22RB", "Wp85vFQXwM", "VrS3isEMXJ", "QD8EPtDHZ3", "ODePlSsTNt", "G5XngtQp18", "Axcc8vr9AR", "8zwD4pHqNr", "3lk0djgtSV", "3hW38qIn9g" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732375755775, 1730244428820, 1729159528660, 1730970522726, 1732373863687, 1732563058965, 1732375642819, 1735193656363, 1732373467681, 1732653391847, 1732375797662, 1729161981399, 1732671899145, 1732374409571, 1737524063877, 1732697965312, 1732373418538 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_aDbU" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_e4sc" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_pFrz" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_aDbU" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Submission10588/Area_Chair_DZNL" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_pFrz" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_AvcW" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_e4sc" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10588/Reviewer_AvcW" ], [ "ICLR.cc/2025/Conference/Submission10588/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> 1. 
The paper addresses a theoretical statement about the transformer model, specifically that it can learn in-context through gradient descent. However, this concept is already well-known and documented. In theoretical research, it is widely recognized that transformers, convolutional models, and recurrent models, such as state-space models, are universal approximators and are capable of learning continuous target relationships. Therefore, demonstrating the same for state-space models does not appear to offer significant theoretical advancement. If the authors wish to underscore the importance of this work, I would recommend showing that previous work in approximation theory does not extend to the in-context learning case. Without this distinction, the contribution seems to fall within a subset of known results that hold limited value for theoretical study.\\n\\nThe reviewer mentions that the ability of transformers and other universal models to perform ICL is well-documented, and thus demonstrating the same for SSMs might not constitute a significant theoretical advancement. We respectfully clarify that our work provides a mechanistic explanation specifically tied to gradient descent dynamics in SSMs, which distinguishes it from prior general approximation results. While it is true that many architectures can approximate target functions, we focus on elucidating how SSMs can explicitly emulate gradient-based learning mechanisms. Our goal is not to provide a general approximation result but a concrete and direct construction to show that ICL can be implemented using GD in SSMs.\\n\\n> 2. The notion of in-context learning, as presented, lacks practical interest. Simply stating that a model \\\"can learn\\\" through in-context learning is insufficient, as the same argument could be made for various methods, including Newton's or quasi-Newton's methods. 
There is no compelling reason for practitioners to assume that, when state-space models engage in in-context learning, the behavior in terms of loss convergence or asymptotic rates would align with that of gradient descent. Clarifying this distinction would strengthen the paper\\u2019s contribution. Could you provide empirical comparisons of convergence rates or asymptotic behavior between your method and alternatives like Newton's or quasi-Newton's methods.\\n\\nWe are not aware of any specific direct construction showing that SSMs in the general form commonly used can learn using Newton\\u2019s or quasi-Newton\\u2019s methods. We do in fact demonstrate in Figs 3 & 4 that SSMs trained from a random initialisation do exhibit convergence to the same values as would GD. We plan to look into similar empirical comparisons for Newton\\u2019s and quasi-Newton\\u2019s methods in future work.\\n\\n> 3. The paper introduces a state-space model with local self-attention, which is not a commonly adopted approach in practice. It would be more beneficial to align the framework with models that are widely used in real-world applications. A method akin to the linear transformer might be more appropriate and could provide a better point of reference for practical utility.\\n\\nLocal self-attention is more commonly referred to as sliding window attention in the literature (we have updated our paper to reflect this). It is in fact used in major high-performance models such as Griffin [1] and Samba [2].\\n\\n[1] S. De et al., \\u2018Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models\\u2019, Feb. 29, 2024, arXiv:2402.19427. doi: 10.48550/arXiv.2402.19427.\\n\\n[2] L. Ren, Y. Liu, Y. Lu, Y. Shen, C. Liang, and W. Chen, \\u2018Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling\\u2019, Jun. 11, 2024, arXiv:2406.07522. 
doi: 10.48550/arXiv.2406.07522.\"}", "{\"summary\": \"This paper investigates how state-space models (SSMs) can perform in-context learning through gradient descent. The authors provide both theoretical and empirical evidence that SSMs augmented with local self-attention can emulate gradient descent on implicit regression models. Their key insight is that the diagonal linear recurrent layer in SSMs can act as a gradient accumulator.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"To my knowledge, the insight viewing SSMs as a gradient accumulator allowing them to emulate gradient descent on in-context learning tasks is novel and the combination with local self-attention for preprocessing is interesting\", \"The mathematical theory appears to be sound\", \"The presentation of the material and step by step walk through of the theory from simple cases to more complex is clear and helpful\", \"The theoretical findings potentially point to a mechanistic understanding of architectural requirements (for SSMs or other models) that enable types of in-context learning\"], \"weaknesses\": [\"While the paper does a good job of explaining the theory and formulation of GD-SSM and empirically validating GD-SSM on regression tasks, I didn't feel like it shed that much insight into the currently used SSM variants used in practice.\", \"Note that line 488 says: \\\"These findings not only explain the success of recent SSM variants in in-context learning tasks but\", \"also provide valuable insights for the design of future sequence models.\\\"\", \"Note that Lines 052-053 say, referring to the modern SSM variants : \\\"Which features of these successful models contribute to in-context learning, as opposed to earlier variants? 
Using a constructive approach, we pinpoint input-dependent input and output processing, as the key features required for in-context learning\\\".\", \"But I do not think the current version of the paper supports either of the claims that this paper sheds light on these questions\", \"The approach constructed seems to be very different from those used in practice. In addition, the methods commonly used in practice do not appear to do well on the regression ICL tasks in this paper. So it is unclear what I should take away from this related to prior SSM methods?\", \"I think the empirical results were an opportunity to provide insight here, but didn't seem to fully achieve this. Please see my questions below which may clarify this for me.\", \"Related to the above, the experimental section is light on architectural details, making it hard to determine what exactly is being compared empirically and what conclusions can be drawn. Please include more experimental details including the architectures and hyperparameters.\", \"The paper is often unclear on the terminology of local self-attention and local linear self-attention. The formulation in Section 3.2 appears to only require local linear self-attention, yet other times in the paper local self-attention is used. In the related works section the two are contrasted. I would recommend being very explicit and consistent on this point as the two are very different.\", \"The paper is limited to regression style in-context learning. This is interesting and amenable to theoretical analysis, but also limits the impact of the investigation. See https://arxiv.org/abs/2401.12973 and https://arxiv.org/abs/2402.04248 for other papers that have empirically investigated other types of in-context learning in different architectures.\"], \"questions\": \"1. Can you explicitly define GD-SSM? Perhaps even provide pseudo-code? What are its learnable parameters? 
It is defined in 283 after the fact, but I think a more explicit definition would be helpful.\", \"questions_related_to_experiments_and_first_two_bullets_in_weaknesses_section_above\": \"2. How do the results in this paper explain the success of recent SSM variants (as claimed in line 488)? Lines 052-053 say: \\\"Which features of these successful models contribute to in-context learning, as opposed to earlier variants? Using a constructive approach, we pinpoint input-dependent input and output processing, as the key features required for in-context learning\\\". I hoped the paper would provide insight into this question. However, it seems that the constructed GD-SSM is quite different from the currently used SSM architectures (e.g. Griffin, Mamba), and performs much better empirically in the experiments. Meanwhile the currently used SSM architectures do not seem to perform regression in-context learning (at least for one layer). So how do the results in this paper answer (or point to answering) the first question in line 052 or support the claim in line 488? This is my main question regarding this paper. I list some additional questions below that are an attempt to clarify some of the presented empirical results.\\n\\n3. Why does Griffin do so poorly compared to linear transformers and S5? Shouldn't Griffin be a mix of sliding window attention and SSM (the RG-LRU), making it similar to the GD-SSM formulation? Or is the model referred to as Griffin just RG-LRU?\\n\\n4. Why does the time-invariant S5 appear to consistently outperform the input-dependent Mamba and Griffin models (even though all fail to converge)? I would have expected the models with input-dependent dynamics to perform ICL better. Is this difference at all interesting?\\n\\n5. Does it benefit the other architectures (1 layer linear attention, S5, Griffin, Mamba) to also provide them with the local self-attention preprocessing? 
Shouldn't they then be able to potentially learn the GD-SSM model if provided this? If not, why not?\\n\\n6. Can 2 layers of the SSMs solve the task? Note that combining the local self-attention with the diagonal SSM is a combination of 2 sequence processors.\", \"other_questions_and_comments\": \"7. Where is the 1-step GD in Figure A?\\n\\n8. In Figure 4B, the 1 or 2 layer GD-SSM formulation always seems to achieve lower loss than the corresponding number of steps GD model. Why is this? What happens if we compare more layers and more steps, e.g. 5 or 10 steps/layers? \\n\\n9. Note that the color scheme in Figure 4C etc is hard to read and tell which line corresponds to which model.\\n\\n10. Note that LSA is first introduced in equation 2, but only later defined in line 119.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper imitates 'Transformers Learn In-Context by Gradient Descent' (https://arxiv.org/abs/2212.07677). This paper proves that a single structured state-space model layer with local self-attention can reproduce the outputs of an implicit linear model with least-squares loss. (The task considered is not general.)\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper demonstrates that the state-space model with LSA can learn in-context learning tasks on linear and nonlinear regression problems. While I do not dispute the claim that state-space models can achieve in-context learning via gradient descent, my concern lies in whether the specific modification introduced warrants the detailed calculations provided. The conclusions, as presented, seem to offer limited insights into how this work might advance research on improving in-context learning capabilities. A clearer connection to meaningful improvements in this area would significantly enhance the paper's contribution.\\n2. 
The experiments compare the state-space model with Griffin and Linear Transformer, but they are restricted to shallow architectures of 1 or 2 layers. This setup is inconsistent with typical in-context learning scenarios, where models need to be sufficiently large for emergent phenomena and meaningful in-context learning capabilities to surface. The current experiments do not effectively capture these dynamics, making it difficult to observe such phenomena in shallow sequence models. Expanding the experiments to include deeper architectures would provide a more realistic assessment of in-context learning in state-space models.\", \"weaknesses\": \"1. The paper addresses a theoretical statement about the transformer model, specifically that it can learn in-context through gradient descent. However, this concept is already well-known and documented. In theoretical research, it is widely recognized that transformers, convolutional models, and recurrent models, such as state-space models, are universal approximators and are capable of learning continuous target relationships. Therefore, demonstrating the same for state-space models does not appear to offer significant theoretical advancement. If the authors wish to underscore the importance of this work, I would recommend showing that previous work in approximation theory does not extend to the in-context learning case. Without this distinction, the contribution seems to fall within a subset of known results that hold limited value for theoretical study.\\n2. The notion of in-context learning, as presented, lacks practical interest. Simply stating that a model \\\"can learn\\\" through in-context learning is insufficient, as the same argument could be made for various methods, including Newton's or quasi-Newton's methods. 
There is no compelling reason for practitioners to assume that, when state-space models engage in in-context learning, the behavior in terms of loss convergence or asymptotic rates would align with that of gradient descent. Clarifying this distinction would strengthen the paper\u2019s contribution. Could you provide empirical comparisons of convergence rates or asymptotic behavior between your method and alternatives like Newton's or quasi-Newton's methods?\\n3. The paper introduces a state-space model with local self-attention, which is not a commonly adopted approach in practice. It would be more beneficial to align the framework with models that are widely used in real-world applications. A method akin to the linear transformer might be more appropriate and could provide a better point of reference for practical utility.\", \"questions\": \"1. See Weakness.\\n2. The behavior of GD and 1-D GD-SSM appears different. Could the authors provide an explanation for this discrepancy? Clarifying this would help the readers better understand the distinctions in learning dynamics between these approaches.\\n3. Figure 3: The font size in Figure 3 is too small and could be increased for readability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the ability of SSMs to perform ICL through gradient descent during the forward pass over recurrent steps. It provides a theoretical analysis of various ICL scenarios, demonstrating that simple SSMs, equipped with input and output-dependent processing, can accurately mimic gradient descent when data is presented as a sequence. This theory offers an explanation for the capacity of modern SSMs to execute ICL and outlines a network circuit capable of handling such tasks. 
Empirical experiments confirm that the proposed circuit can effectively learn and perform small-scale synthetic tasks in practice.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) Enhances understanding of inductive biases in SSMs for ICL tasks.\\n\\n(2) Bridges key domains such as SSMs, ICL, and mechanistic interpretability.\\n\\n(3) Demonstrates generality by extending results to multi-step cases, multiple layers, and multi-dimensional data.\\n\\n(4) Empirical analysis shows practical alignment with the theory-based construction in simple cases.\\n\\n(5) Clarity: While some aspects could be improved (such as adding a figure to illustrate the main ideas in Section 3.1), the paper is clear, well-motivated, and easy to follow. It starts with simple cases and provides clear definitions, making it an enjoyable read!\", \"weaknesses\": \"(1) The authors have **overlooked important related work in this domain**. For example, [1] presented at NeurIPS 2016, demonstrates that **RNNs can perform gradient descent during the forward pass**. Additionally, Section 3.1 in the current paper shares several similarities with Section 2 of [1]. To be clear, this is not an accusation of plagiarism, but rather an indication of missing key references. While there is a difference in scope (RNNs versus SSMs), this oversight reduces the originality of contribution #1. I kindly ask the authors to specify the principal differences between the approach taken by [1] and the approach used by SSMs in their work to better highlight the novel contributions.\\n\\n(2) **Overlooks simple alternative approaches:** While the theoretical analysis is accurate, the authors overlook significant alternative approaches. References [2-4] demonstrate the connection between S6 layers and attention, showing that S6 can be more expressive than attention without the softmax. 
Additionally, various ICL studies for transformers omit the softmax (see [5-6] as an example), allowing direct conclusions that could extend to SSMs. **Given the extensive exploration of ICL capabilities in transformers, discussions on the ICL potential of SSMs should consider this reduction approach**. I recommend that the authors evaluate whether this approach could yield additional theoretical developments to strengthen their analysis. Moreover, I suggest that the authors explicitly state the advantages of their approach compared to the proposed alternatives. For instance, Section 3 introduces specific circuits that cannot be achieved through simple reductions. \\n\\n(3) **Understanding ICL in SSM variants can be enhanced** by examining their sub-components. Previous research indicates that S6 layers exhibit significantly better ICL capabilities than earlier SSMs [7]. However, while the authors highlight input and output-dependent processing as crucial features, they do not empirically ablate these features across various ICL tasks, nor do they provide a detailed theoretical analysis to substantiate this claim explicitly. I recommend adding a subsection that explores these aspects in depth. It is also important to note that input- and output-dependent processing can be implemented through various gating mechanisms. Hence, this claim could be considered somewhat ambiguous, as gated state-space models have been previously studied without demonstrating the same level of ICL capabilities as models like Mamba / S6.\\n\\n\\n(4) **The claims regarding GD-SSM** (\\u201cOur construction, which we call GD-SSM, is not restricted to in-context learning tasks and performs well on general-purpose prediction problems\\u201d) do not hold, and much **more empirical analysis is required to justify** them.\\n\\n___\\n\\n[1] Learning to learn by gradient descent by gradient descent. Andrychowicz et al.\\n\\n[2] The Hidden Attention of Mamba Models. 
Ali et al.\\n\\n[3] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality. Dao et al.\\n\\n[4] Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks. Sieber et al.\\n\\n[5] Transformers Learn In-Context by Gradient Descent. Oswald et al. (see section 2)\\n\\n[6] Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers. Dai et al. (see section 3.1)\\n\\n[7] Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks. Park et al.\", \"questions\": \"Please see weaknesses 1-3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"thank you for a constructive review\", \"comment\": \"We thank the reviewer for their constructive points, and appreciate the reviewer recognizing the novelty of our perspective on SSMs as gradient accumulators, the soundness of our mathematical theory, and the clarity of our step-by-step theoretical exposition.\", \"responses_to_weaknesses\": \"> While the paper does a good job of explaining the theory and formulation of GD-SSM and empirically validating GD-SSM on regression tasks, I didn't feel like it shed that much insight into the currently used SSM variants used in practice.\\n\\nWe agree that the constructed approach doesn\u2019t exactly match existing methods, although the use of sliding window attention is becoming more common in highly performant models [1,2] and 2-D state and input-dependent output computation is pretty standard through gating [3].\\nTo understand why Griffin and Mamba do so badly, we explored their performance a bit more (see Table 1 in the appendix). We found that adding additional layers (our earlier experiments were with one layer), increasing the model dimension, and increasing the number of heads improves performance. 
We hypothesize that this is due to the fact that the architecture doesn\u2019t exactly match GD, and additional layers are required to play the role of the sliding window attention to perform time-mixing (access adjacent elements in the sequence). \\n\\nOur construction distils out the exact inductive bias required for ICL.\\n\\n> Related to the above, the experimental section is light on architectural details, making it hard to determine what exactly is being compared empirically and what conclusions can be drawn. Please include more experimental details including the architectures and hyperparameters.\\n\\nWe\u2019ve added more experimental and architectural details in the Appendix. Please let us know if any specific aspects still remain unclear.\\n\\n> The paper is often unclear on the terminology of local self-attention and local linear self-attention. The formulation in Section 3.2 appears to only require local linear self-attention, yet other times in the paper local self-attention is used. In the related works section the two are contrasted. I would recommend being very explicit and consistent on this point as the two are very different.\\n\\nWe agree with the reviewer, and have replaced all references to \u201cLocal Self-attention\u201d with \u201cSliding Window Attention\u201d, which is more explicit.\\n\\n> The paper is limited to regression style in-context learning. This is interesting and amenable to theoretical analysis, but also limits the impact of the investigation. See https://arxiv.org/abs/2401.12973 and https://arxiv.org/abs/2402.04248 for other papers that have empirically investigated other types of in-context learning in different architectures.\\n\\nWe agree with the reviewer, and plan to look at other ICL tasks in future work. \\n\\n[1] S. De et al., \u2018Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models\u2019, Feb. 29, 2024, arXiv:2402.19427. 
doi: 10.48550/arXiv.2402.19427.\\n\\n[2] L. Ren, Y. Liu, Y. Lu, Y. Shen, C. Liang, and W. Chen, \u2018Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling\u2019, Jun. 11, 2024, arXiv:2406.07522. doi: 10.48550/arXiv.2406.07522.\\n\\n[3] A. Gu and T. Dao, \u2018Mamba: Linear-Time Sequence Modeling with Selective State Spaces\u2019, Dec. 01, 2023, arXiv:2312.00752. \\n\\n[4] T. Dao and A. Gu, \u2018Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality\u2019, in Proceedings of the 41st International Conference on Machine Learning, PMLR, Jul. 2024, pp. 10041\u201310071. \\n\\n[5] J. Park et al., \u2018Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks\u2019, Feb. 06, 2024, arXiv:2402.04248.\"}", "{\"comment\": \"I thank the authors for their response. I will take it into account when discussing with the other reviewers during the reviewer discussion phase.\"}", "{\"comment\": \"We thank the reviewer for recognizing the soundness and clarity of the paper and the empirical evidence supporting the paper's claims in small-scale settings.\\n\\nWe also thank the reviewer for bringing up a very closely related paper in Zucchet et al. 2023. However, there are critical differences between their paper and ours, as we explain below. To summarize, ours is a more explicit, direct and parsimonious construction showing that SSMs can do ICL by GD, while theirs is very indirect.\", \"in_detail\": \"In Zucchet et al., they show the equivalence of gated linear recurrent networks and linear self-attention. It is true that linear self-attention has been shown to be able to do ICL by GD [1], and so, indirectly, they show that gated linear recurrent networks can perform ICL by GD. 
In our case, we consider the general formulation of SSMs and show a correspondence directly between them and ICL via GD, leading to a simpler and more understandable construction.\", \"our_construction_is_more_parsimonious_and_efficient_due_to_the_following\": \"1. A model using linear attention requires $3f^2$ input parameters for the separate embeddings for query, key and value, whereas in our case we only need $f^2$ input parameters since we have only one embedding.\\n2. To train the parameters of a linear-regression model with $f \\\\times f$ parameters, the GD-SSM requires $f^2$ recurrent neurons with $f^2$ recurrent parameters since the recurrent matrix is diagonal, whereas a linear self-attention as constructed in [1] requires $12f^2$ parameters.\\n3. The RNN construction in Zucchet et al. uses $O(f^4)$ neurons to represent the $3f^2$ parameters in the linear self-attention layer, further increasing the size and redundancy of the model.\\n\\nOur construction distils out the exact inductive bias required for ICL. Moreover, the simple form of linear transformers used in [1] still lags behind state-space models in task performance. \\n\\nWhile the end result is the same, the method used to achieve this is completely different \u2014 ours is more direct.\\n\\nWe have added a discussion of this to our paper.\\n\\n[1] J. von Oswald et al., \u2018Transformers Learn In-Context by Gradient Descent\u2019, in Proceedings of the 40th International Conference on Machine Learning, PMLR, Jul. 2023, pp. 35151\u201335174. Accessed: Apr. 16, 2024. [Online]. Available: https://proceedings.mlr.press/v202/von-oswald23a.html\"}", "{\"metareview\": \"The paper proposes that SSMs can perform in-context learning through gradient descent during their forward pass. The authors try to provide theoretical and empirical evidence that SSMs augmented with local self-attention can emulate gradient descent on implicit regression models. 
Their key insight is that the diagonal linear recurrent layer in SSMs can act as a gradient accumulator.\", \"positive\": [\"novel theoretical perspective on SSMs as gradient accumulators for in-context learning\", \"sound mathematical theory\"], \"weaknesses\": [\"limited practical insights into currently used SSM variants (e.g., Griffin, Mamba)\", \"restricted to regression-style in-context learning tasks\", \"unclear connection between theoretical construction and empirical success of modern SSMs\", \"missing related work.\", \"more empirical support is needed for the claims about GD-SSM's general applicability.\", \"The paper is borderline. While the authors addressed some of the reviewers' concerns, the reviewers remained skeptical overall. I vote for a rejection but encourage the authors to improve the paper based on reviewers' suggestions.\"], \"additional_comments_on_reviewer_discussion\": [\"The rebuttal period focused on several key points:\", \"the authors clarified their construction is more direct and parameter-efficient than indirect reduction-based approaches, though reviewer AvcW maintained this didn't provide sufficient new conceptual insights.\", \"the authors added empirical ablation studies and architectural details in response to concerns about connections to practical architectures. They showed that adding layers and increasing model dimension improved the performance of existing architectures.\", \"the authors acknowledged the need to tone down claims about GD-SSM's applicability to general prediction tasks.\", \"While the authors provided thorough responses, the fundamental concerns about novelty and practical relevance weren't fully addressed.\"]}
However, while the authors highlight input and output-dependent processing as crucial features, they do not empirically ablate these features across various ICL tasks, nor do they provide a detailed theoretical analysis to substantiate this claim explicitly. I recommend adding a subsection that explores these aspects in depth. It is also important to note that input- and output-dependent processing can be implemented through various gating mechanisms. Hence, this claim could be considered somewhat ambiguous, as gated state-space models have been previously studied without demonstrating the same level of ICL capabilities as models like Mamba / S6.\\n\\nOur theoretical analysis provides a construction showing that input-dependent processing at both the inputs and outputs is necessary to achieve ICL. We agree that these can be implemented by various gating mechanisms, but our construction suggests that they need to be of a specific form to achieve ICL. Specifically, the input-dependent input processing needs to be in the form of a linear sliding window attention. To our knowledge, previous gated state-space models do not consider this form. We have also added empirical ablation studies in the Appendix in the updated version of the paper. \\n\\n> (4) The claims regarding GD-SSM (\u201cOur construction, which we call GD-SSM, is not restricted to in-context learning tasks and performs well on general-purpose prediction problems\u201d) do not hold, and much more empirical analysis is required to justify them.\\n\\nWe acknowledge that the statement regarding GD-SSM\u2019s applicability to general prediction tasks requires additional empirical evidence, and have updated this in the paper. We plan to look at other sequence modelling tasks including language modelling in our future work.\"}
I completely mixed up the references and apologize for the confusion. Your response fully addresses this issue.\\n\\nRegarding W.2 (direct vs. indirect approach), I agree that the proposed formulation is more parameter-efficient, direct, and simpler than the indirect approach. However, I believe these advantages should be clearly discussed (in depth) in the paper and demonstrated as significant through theoretical or empirical findings. Without this, in my view, the main contribution feels somewhat limited.\\n\\nAdditionally, concerns W.3 and W.4 remain unaddressed. Therefore, while I raise my score from 3 to 5, I still believe that this paper should be rejected.\"}", "{\"title\": \"responses to questions\", \"comment\": \"> 2. The behavior of GD and 1-D GD-SSM appears different. Could the authors provide an explanation for this discrepancy? Clarifying this would help the readers better understand the distinctions in learning dynamics between these approaches.\\n\\nFig 2A does not show the learning dynamics of GD but rather the final result obtained by one step of GD. We have updated the figure to make this clearer. The GD-SSM does have a learning curve because we start from a random initialisation of an SSM with the inductive biases we propose and train it on the ICL task for linear regression.\\n\\n> 3. Figure 3: The font size in Figure 3 is too small and could be increased for readability.\\n\\nThank you, we have fixed this now.\"}", "{\"summary\": \"In-context learning (ICL) is one of the surprising capabilities of LLMs at scale. Seminal results have shown that Transformer-based architectures have this ability and more recent ones confirmed that SSMs also do. This work shows that SSMs can implement ICL by gradient descent. 
They provide a constructive proof showing that 1 layer can implement one GD step and confirm empirically on toy tasks that the networks find this solution.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is overall well written and easy to follow. The theoretical part is sound and experiments convincingly demonstrate the paper's claims in toy settings.\", \"weaknesses\": \"I am concerned by the novelty of this paper. [Zucchet et al. 2023](https://arxiv.org/abs/2309.01775) show that 1 SSM layer can implement any linear self-attention layer. This result implies that any ICL algorithm LSA can implement, an SSM can as well. This holds for the 1 GD step studied in this paper, but also for any other algorithm the community has been studying over the last few years. Additionally, that paper also has very similar experiments to the ones presented here.\", \"questions\": \"To the best of my understanding and as mentioned above, the results presented here are a subset of the results presented in Zucchet et al. 2023. Can the authors compare their work to that paper and highlight what their insights are?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for providing the clarifications and additional details. After reviewing them, I have decided to maintain my score. I look forward to future versions that provide a deeper understanding of the relationship between the dynamics of ICL and GD/Newton/Quasi-Newton methods.\"}", "{\"title\": \"responses to questions\", \"comment\": \"> 1. Can you explicitly define GD-SSM? Perhaps even provide pseudo-code? What are its learnable parameters? 
It is defined in 283 after the fact, but I think a more explicit definition would be helpful.\\n\\nWe have now mentioned the term GD-SSM earlier in line 061 and added references to it close to l.285 near the equations that define it for the general case. We have also described which parameters of the model are learnable there.\\n\\n> 2. How do the results in this paper explain the success of recent SSM variants (as claimed in lines 488)? Line 052-053 say : \\\"Which features of these successful models contribute to in-context learning, as opposed to earlier variants? Using a constructive approach, we pinpoint input-dependent input and output processing, as the key features required for in-context learning\\\". I hoped the paper would provide insight into this question. However, it seems that the constructed GD-SSM is quite different from the currently used SSM architectures (e.g. Griffin, Mamba), and performs much better empirically in the experiments. Meanwhile the currently used SSM architectures do not seem to perform regression in-context learning (at least for one layer). So how do the results in this paper answer (or point to answering) the first question in line 052 or support the claim in line 488? This is my main question regarding this paper. I list some additional questions below that are an attempt to clarify some of the presented empirical results below.\\n\\nPlease see the explanation we have provided above on this point in the weaknesses.\\n\\n> 3. Why does Griffin do so poorly compared to linear transformers and S5? Shouldn't Griffin be a mix of sliding window attention and SSM (the RG-LRU), making it similar to the GD-SSM formulation? Or is the model referred to as Griffin just RG-LRU?\\n\\nWhile Griffin does have sliding window attention, we hypothesize that its poor performance might be due to the specific form of recurrence it has, which doesn\u2019t match GD. 
Adding additional layers and heads seems to improve performance \u2014 see new results added in Table 1 in the Appendix.\\n\\n> 4. Why does the time-invariant S5 appear to consistently outperform the input-dependent Mamba and Griffin models (even though all fail to converge)? I would have expected the models with input-dependent dynamics to perform ICL better. Is this difference at all interesting?\\n\\nFrom our analysis, it doesn\u2019t seem like input-dependent dynamics itself has a major role to play for ICL. Note that input-dependent recurrence would correspond to online GD rather than mini-batch GD, which is inherently noisier. It is certainly a very interesting aspect, and we plan to explore it further to understand how this construction compares to how existing models do ICL in future work.\\n\\n> 5. Does it benefit the other architectures (1 layer linear attention, S5, Griffin, Mamba) to also provide them with the local self-attention preprocessing? Shouldn't they then be able to potentially learn the GD-SSM model if provided this? If not, why not?\\n\\nWe do expect that adding SWA to existing architectures would enable them to learn GD-SSM as long as they also have multiplicative input-dependent output processing (potentially through gating) and a 2-D state or equivalent.\\n\\n> 6. Can 2 layers of the SSMs solve the task? Note that combining the local self attention with the diagonal SSM is a combination of 2 sequence processors.\\n\\nYes, we expect so. Our current results already demonstrate that a 2-layer linear self-attention can solve the linear regression task \u2014 see Fig. 4C and Fig. 6. We expect it would be similar for other SSMs (as long as the output processing is appropriate).\\n\\n> 7. Where is the 1-step GD in Figure A?\\n\\nThank you for pointing this out. The 1-step GD loss is the same as in Fig 2A. We\u2019ve added it now to Fig 4A.\\n\\n> 8. 
In Figure 4B, the 1 or 2 layer GD-SSM formulation always seems to achieve lower loss than the corresponding number of steps GD model. Why is this? What happens if we compare more layers and more steps, e.g. 5 or 10 steps/layers?\\n\\nWe suspect that in the case of non-linear regression, GD-SSM learns a strategy that is presumably better than pure GD. But it\u2019s not clear what this strategy might be.\\n\\n> 9. Note that the colour scheme in Figure 4C etc is hard to read and tell which line corresponds to which model.\\n> 10. Note that LSA is first introduced in equation 2, but only later defined in line 119.\\n\\nThank you for pointing these out. We have fixed them.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for clarifying the link with Zucchet et al. 2023 [1]. While I agree that your construction is more direct, as it directly tackles GD whereas [1] is a more general result, I don't think this brings any new conceptual insight, nor that it is a completely different method. Additionally, the comparison with [1] in your point 3 is a bit misleading: [1] uses $O(f^2)$ neurons and not $O(f^4)$.\\n\\nFor this reason, I decide to keep my score as it is.\"}", "{\"comment\": \"We thank the reviewer for recognizing the strengths of the paper, including its clarity, its ability to bridge key domains, and its contribution to the understanding of inductive biases for ICL tasks. We respond to the listed weaknesses below:\\n\\n> (1) The authors have overlooked important related work in this domain. For example, [1] presented at NeurIPS 2016, demonstrates that RNNs can perform gradient descent during the forward pass. \\n\\nWe appreciate the reviewer bringing up [1] (Andrychowicz et al., NeurIPS 2016). However, we believe this work is completely unrelated to ours. 
Reference [1] employs meta-learning to train two levels of LSTMs: one LSTM generating weight updates which are used to update the \\u201clower level\\u201d LSTM (see Fig. 3 in [1]). This meta-learning framework fundamentally differs from our work, where we construct a state-space model (SSM) that implicitly performs gradient descent as part of its forward pass dynamics to perform in-context learning. We do not use meta-learning, and we do not use LSTMs producing updates for another LSTM. \\n\\n> Additionally, Section 3.1 in the current paper shares several similarities with Section 2 of [1]. To be clear, this is not an accusation of plagiarism, but rather an indication of missing key references. While there is a difference in scope (RNNs versus SSMs), this oversight reduces the originality of contribution #1. I kindly ask the authors to specify the principal differences between the approach taken by [1] and the approach used by SSMs in their work to better highlight the novel contributions. \\n\\nWe would appreciate a more detailed explanation of the reviewer\\u2019s concern on the similarity between Section 2 of [1] and our Section 3.1. To our understanding [1] does not consider linear regression at all, and in Section 2, formulates a meta-learning loss. Whereas, in our Section 3.1, we start from linear regression and construct an SSM that can perform in-context learning. Our approach provides a mechanistic explanation for how in-context learning can emerge in SSMs without requiring meta-learned updates or external optimization signals.\\n\\n> (2) Overlooks simple alternative approaches: While the theoretical analysis is accurate, the authors overlook significant alternative approaches. References [2-4] demonstrate the connection between S6 layers and attention, showing that S6 can be more expressive than attention without the softmax. 
Additionally, various ICL studies for transformers omit the softmax (see [5-6] as an example), allowing direct conclusions that could extend to SSMs. Given the extensive exploration of ICL capabilities in transformers, discussions on the ICL potential of SSMs should consider this reduction approach. I recommend that the authors evaluate whether this approach could yield additional theoretical developments to strengthen their analysis. Moreover, I suggest that the authors explicitly state the advantages of their approach compared to the proposed alternatives. For instance, Section 3 introduces specific circuits that cannot be achieved through simple reductions. \\n\\nWe recognize the relevance of studies [2-6] demonstrating connections between SSMs, attention mechanisms, and gradient descent. While those studies propose **indirect** reduction-based approaches, our GD-SSM construction **explicitly and directly** encodes gradient descent into the recurrent dynamics of SSMs, without relying on architectural simplifications. Therefore, we contend that our approach is simpler. \\n\\nWhile it is true that linear self-attention can be written as a recurrent network, our approach demonstrates a more direct connection between state-space models and ICL by GD. Note that the simple form of linear transformers used in [5] still lags behind state-space models in task performance. Moreover, our construction is more parameter-efficient than linear self-attention \\u2014 a model using linear attention requires $3mf$ input parameters for the separate embeddings for query, key and value, whereas in our case we only need $mf$ input parameters since we have only one embedding.\\n\\nReferences [2-4] deal with selective state-space models with non-linear input-dependence of the SSM parameters. Our construction suggests that such non-linear input dependence, while powerful, may not be necessary for in-context learning. 
The remaining studies are empirical, and don\\u2019t show a direct theoretical construction, but are nonetheless interesting for understanding whether the ICL mechanism in the wild corresponds to the one we propose.\\n\\nAs suggested, we will expand our discussion to explicitly outline the advantages of our approach over reduction-based methods, such as greater interpretability and architectural alignment with currently used SSMs, and clarify that the unique circuits introduced in Section 3 cannot be achieved through these reductions alone. This addition will provide a clearer context for how GD-SSM complements and extends prior work.\"}"
52UtL8uA35
Deep Networks Learn Features From Local Discontinuities in the Label Function
[ "Prithaj Banerjee", "Harish Guruprasad Ramaswamy", "Mahesh Lorik Yadav", "Chandra Shekar Lakshminarayanan" ]
Deep neural networks outperform kernel machines on several datasets due to feature learning that happens during gradient descent training. In this paper, we analyze the mechanism through which feature learning happens and use a notion of features that corresponds to discontinuities in the true label function. We hypothesize that the core feature learning mechanism is label function discontinuities attracting model function discontinuities during training. To test this hypothesis, we perform experiments on classification data where the true label function is given by an oblique decision tree. This setup allows easy enumeration of label function discontinuities, while still remaining intractable for static kernel/linear methods. We then design/construct a novel deep architecture called a Deep Linearly Gated Network (DLGN), whose discontinuities in the input space can be easily enumerated. In this setup, we provide supporting evidence demonstrating the movement of model function discontinuities towards the label function discontinuities during training. The easy enumerability of discontinuities in the DLGN also enables greater mechanistic interpretability. We demonstrate this by extracting the parameters of a high-accuracy decision tree from the parameters of a DLGN. We also show that the DLGN is competitive with ReLU networks and other tree-learning algorithms on several real-world tabular datasets.
[ "Deep Learning", "Feature learning", "Interpretable", "Local Discontinuities", "Deep learning theory", "Deep neural architectures", "Supervised learning" ]
Accept (Poster)
https://openreview.net/pdf?id=52UtL8uA35
https://openreview.net/forum?id=52UtL8uA35
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wElabVg6A3", "tPOPvWHOex", "tG0OIN5dqO", "sRlscGCBs2", "oYncB19C14", "o4jIA8BPHq", "j1cEXxmrIa", "hT56EeVIcD", "fl6hpgWmLs", "f9XE04XWmP", "dD9BZLTikr", "afwgHyiWHe", "X0SWVfxyHq", "Uy67dlhwDA", "SGNcUS5i8X", "QWzwsog9UE", "OhBX7IKzCu", "GXb5u5TgSx", "AC2JYT3845", "6NjOj6oAF4" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1730642398948, 1730651372328, 1731670857202, 1732207676741, 1730167423870, 1734766002349, 1732228134486, 1731670310528, 1731776055487, 1732496660327, 1731839195839, 1731839842694, 1730585721954, 1731778965591, 1732713222520, 1732692988075, 1732693406115, 1737523443885, 1732560848785, 1731670699303 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_D8rp" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_tLyw" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_xQ4v" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_xQ4v" ], [ "ICLR.cc/2025/Conference/Submission1259/Area_Chair_b1k1" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_n82d" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_D8rp" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_n82d" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1259/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1259/Reviewer_tLyw" ], [ "ICLR.cc/2025/Conference/Submission1259/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduce a model called the Deep Linearly Gated Network (DLGN) to study feature learning, specifically in binary classification tasks defined by an oblique decision tree labeling function. They use DLGN to test the hypothesis that during training, the model\\u2019s discontinuities move towards label function's discontinuities. The paper includes evaluations on dozens of open tabular datasets to compare DLGN with ReLU networks and tree-learning algorithms.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. Clear writing. The authors study a specific class of problems with great clarity.\\n2. The authors' persistence in tackling a challenging yet manageable problem setting is commendable.\\n3. Like a black-box learner, the DLGN is able to learn non-linear features. Yet, it still provides mechanistic interpretability.\\n4. DLGN outperforms both tree-based and non-tree algorithms, as well as ReLU networks, in the oblique decision tree setting, while maintaining strong competitiveness on real-world tabular datasets.\\n5. The authors provide a framework that paves the way for future research and development.\", \"weaknesses\": \"I'm not seeing effective weaknesses.\", \"questions\": \"1. What's the purpose of defining the manifold $\\\\mathcal{M}$ in line 133?\\n2. In line 239, Equation (4), is there a missing transpose on $\\\\mathbf{u}_{i_1}^1$?\\n3. I find it difficult to understand how the computational cost of a forward pass for Equation (1) is less than twice that of a ReLU network with $mL$ nodes. 
Could the authors provide further clarification on this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel model architecture (deep linearly gated network, DLGN) and studies the way in which this model seeks out \\u201clabel discontinuities\\u201d in the data during training. This analysis is enabled by the fact that we can enumerate such label discontinuities in a DLGN. The synthetic datasets with known discontinuities are generated by another model \\u2014 an oblique decision tree (ODT). The authors also show how an ODT can be constructed from a trained DLGN, for the purpose of interpretability. Finally, the paper presents results of fitting a DLGN to several UCI regression tasks, comparing performance to several tree-based, kernel-based, and NN-based baselines.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Originality. The DLGN is an interesting architecture that combines deep linear networks with the gating mechanism to construct a novel class of non-linear models. One could also treat DLGN as a novel decision tree parametrization which, when relaxed using a sigmoid, can be learned by back-propagation. To the best of my knowledge, DLGN is a novel, original model architecture, though its connection to soft decision trees should be studied more carefully.\"], \"weaknesses\": \"- Lack of focus. At the moment, the paper\\u2019s focus is split between a study of feature learning using DLGN (chapter 5), and a study of the DLGN itself as an expressive, yet interpretable model architecture (chapter 6). To me, these are two orthogonal contributions, and the paper would be stronger if the authors focused on one of these.\\n- Significance. The significance of the proposed model and the feature learning study presented in the paper is not clear to me. \\n 1. 
Capturing \\u201clabel discontinuities\\u201d (are these not simply decision boundaries?) is at the core of solving a classification problem, hence it is not surprising that a model which works well on the task has to discover such hyperplanes \\u2014 I don\\u2019t see an alternative way that a model can solve a task. The real question is how an over-parametrized model can correctly identify high-dimensional decision boundaries given limited data \\u2014 this is the main mystery in the theory of deep learning at the moment, and one that this paper doesn\\u2019t shed much light on.\\n 2. To understand the significance of the findings for deep learning, we would need to understand the relationship between a DLGN and a DNN. While it is nice (albeit not surprising) to see how a DLGN uncovers the true \\u201clabel discontinuities\\u201d, how can we know that a DNN will demonstrate the same behavior?\\n 3. While the proposed architecture is appealing due to potentially being both expressive and interpretable, results on the real (UCI) datasets suggest that DLGN is comparable in performance to standard tree algorithms. There is little evidence that we should prefer the proposed architecture to, e.g., the well-studied random forests, which we could also argue to be \\u201cinterpretable\\u201d. (I do not consider results on synthetic data to be good evidence, given the connection between ODTs used to generate the data and DLGN.)\\n 4. While the authors claim that the proposed architecture is interpretable, no interpretations of the models fit to real data are given. If interpretability is the main selling point of the architecture, I would expect a deeper analysis focused on interpretability.\\n- Presentation. Please consider moving the DLGN model diagram to the main text: it\\u2019s difficult to understand the model architecture from the formulas alone. 
Also, please try to stick to academic language, and avoid informal phrases like \\u201cwell nigh impossible\\u201d, \\u201chandily outperform\\u201d, \\u201csucceed comfortably\\u201d, \\u201cabout the same\\u201d, etc.\", \"questions\": [\"Authors suggest that they use a sigmoid instead of an indicator function to make the DLGN differentiable for training. Have authors considered using a temperature parameter for the sigmoid (potentially annealed during training), as common in other continuous relaxation methods?\", \"On lines 406-407 authors suggest that they use DBScan for clustering hyperplanes due to this algorithm being \\u201crobust to outliers\\u201d \\u2014 why do authors expect outliers to be a significant issue here?\", \"How do authors explain the observation that certain layers of the DLGN are more prone to matching the true discontinuities than others (Figure 2)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"First reply to reviewer tLyw (Part 3 of 3)\", \"comment\": \"Question 1: We indeed use a temperature hyperparameter, but freeze it to a constant during training. There is an interesting interplay between the temperature parameters and the initialization scale (i.e., for every temperature $\\\\beta$ and init $w$, there exists an appropriate scalar $\\\\gamma$ such that using temperature $1$ and init $\\\\gamma w$ gives the same optimization trajectory).\", \"question_2\": \"We expect many of the trained DLGN gating hyperplanes to be useless/irrelevant to the label function; these hyperplanes need not be close to any of the label function discontinuities. These hyperplanes would be the outliers.\", \"question_3\": \"The first layer of a DLGN does not change much during the training. This is an artefact of the parameterization where the gating hyperplanes are represented as a product of the parameters that the gradient descent operates on. 
If one maintains the effective linear transform for each layer directly as parameters, this artefact vanishes. (This would correspond to the variant DLGN-SF in the paper).\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response. I maintain that I find the empirical observations interesting and of value, and hence I will maintain my score.\"}", "{\"summary\": \"This paper proposes a theory to explain how feature learning happens in neural networks. The authors posit that neural networks align (or rather, get attracted to) the discontinuities in the way the target label changes over the input domain. As a pilot test for this theory, the authors consider a setting where the target function to learn is an Oblique Decision Tree (ODT). ODTs correspond to decision trees where each internal node is a linear threshold. Thus, the hyperplanes associated with the nodes in the decision tree split up the input space into different labeled regions (see Figure 1). The hyperplane corresponding to the root is \\\"most discontinuous\\\" in terms of how the label changes on both sides of it (owing to further and further splits by its many descendants). The authors posit that the training of non-linear neural networks procedurally aligns the model with these discontinuities.\\n\\nAs a tractable model to further empirically study this theory, the authors propose a novel neural network architecture which they term Deep Linearly Gated Networks (DLGNs). DLGNs are somewhere in between standard nonlinear (e.g., ReLU activated) deep networks and deep linear networks. The function computed by a DLGN can be written as a large summation of terms of the form $f_\\\\pi \\\\cdot g_\\\\pi$. Here, $f_\\\\pi$ is a product of the signs of activations of neurons in a deep linear network (a single neuron from each layer). $g_\\\\pi$ is a product of the activations themselves (a slight detail is that the weight matrices producing the activations in $f_\\\\pi$ and $g_\\\\pi$ are different). 
Thus, we can think of $f_\\\\pi$ as an indicator of the intersection of halfspaces, while $g_\\\\pi$ is the weight we add up if the indicator turns out to be true.\\n\\nThe authors fix the target function to be an ODT, and train the DLGN architecture on synthetic data labelled by the ODT. The surprising observation is given in Figure 2. At the end of training, if one plots the hyperplanes corresponding to the linear threshold at every neuron, one observes that the hyperplanes (at least in the later layers of the DLGN) align very well with the hyperplane splits in the ODT! Furthermore, Table 1 shows that most of these linear thresholds align with some hyperplane in the ODT. That is, the neurons in the DLGN architecture are \\\"getting attracted\\\" to aligning with some internal node in the ODT.\\n\\nThe authors next use this empirical observation to extract a decision tree out of a trained DLGN. Namely, they plot the linear thresholds corresponding to all the activations after training, and then cluster these thresholds. The center of the largest cluster is chosen to be a root node hyperplane, and then we recurse this procedure on data on either side of this hyperplane. This procedure is pictorially well illustrated in Figure 3. Again, one can see that on performing such clustering-based decision tree generation, the first cluster center aligns very well with the root node hyperplane, and the phenomenon continues as we recurse. The upshot is that one can extract an interpretable model from the DLGN.\\n\\nFinally, the authors perform experiments on real-world classification data to illustrate that the classification accuracy of DLGNs is somewhere in between standard ReLU networks and other simpler non-neural network algorithms. 
Thus, DLGNs have the added benefit of being interpretable, but more powerful than standard decision trees.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed theory is intriguing and indeed very interesting. The plots in Figures 2 and 3 are indeed striking---they illustrate how faithfully the DLGN neuron activations are increasingly aligning with the ODT hyperplanes over the process of training. These observations support the theory put forth by the authors about neural network architectures possibly picking up on the discontinuities in the labelling function. The clustering procedure to extract a decision tree is also very interesting, and empirically seems to work well (as suggested by Figure 3). Overall, I find the theory proposed by the authors, along with the striking empirical illustrations, very interesting. These could motivate further theoretical investigations for such phenomena.\", \"weaknesses\": \"The pilot experiments done by the authors are admittedly specialized. Namely, they don't actually consider standard ReLU networks, but only the DLGNs they propose. Furthermore, they only consider cases where the target function is conveniently an ODT. While this is totally okay as an initial starting point, it does raise the question about whether such empirical phenomena of the neurons aligning with the discontinuities also arise in cases where the target function has discontinuities of a different nature (like curvy discontinuities, etc). But this is not a significant weakness, as it seems beyond the scope of a pilot study. But it would be really interesting to visualize the activations of the neurons (say with quadratic thresholds) for when the target function is also composed of curvy discontinuities.\", \"questions\": \"1) Why do you even introduce a manifold $M$ in your notation in line 133? 
It is never used.\\n2) On line 192, I don't think I agree that \\\"no other hyperplane other than the internal nodes has this property\\\". E.g., consider any other hyperplane that is not one of the internal nodes in the plot in Figure 1; doesn't it also have points of both labels on either side of it?\\n3) You introduce this formal notion of $\\\\gamma(R, f)$ as the local discontinuity coefficient in Section 3, but then never mention it anywhere later in the paper. How would you explain the results in, say, Figures 2 and 3 in the context of this quantity? Could you elaborate on this a bit?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes the deep linearly gated network to analyze feature learning in deep neural networks. It highlights how label function discontinuities guide model function discontinuities during training, using a novel architecture for interpretable results. Contributions include mechanistic insights into feature learning, a direct link to decision tree extraction, and competitive performance on synthetic and real-world datasets.\\n\\nThe novelty of the concept proposed by this paper is highly appreciated. The clear presentation also adds to its contribution. Further extensions and more rigorous experimental validation are noted, but the paper is judged to be in satisfactory form at this time.\", \"additional_comments_on_reviewer_discussion\": \"tLyw acknowledged its novelty but noted the importance of some concepts; D8rp appreciated its contribution and clarity; n82d acknowledged its contribution but raised concerns about extensibility, etc.; xQ4v appreciated the interesting nature of the study but noted the limitations of the experiment. 
In any case, the overall evaluation was high.\"}", "{\"title\": \"Response\", \"comment\": \"The response resolves most of my concerns.\\nI will raise the score to 6.\"}", "{\"title\": \"First reply to reviewer tLyw (Part 1 of 3)\", \"comment\": \"Thank you very much for the detailed review. Our responses to the review are below. Some of these arguments are explicit in the paper while the rest are implicit. We will add extra remarks to make the implicit statements explicit in a revised version.\\n\\n1. **DLGN as a tree reparameterization (point in the strengths section)** : This might be a misunderstanding, as the DLGN is NOT interchangeable with a decision tree. We do give a procedure for converting a trained DLGN into a decision tree based on the properties of its learning dynamics, but an arbitrary DLGN cannot be converted into a decision tree. A core reason for the difference is that a data point can pass through any number of the m^L paths of a DLGN and the output is a sum of individual path values, as opposed to a decision tree having a unique path for every input.\\n\\n\\n2. **Lack of focus**: Sections 5 and 6 are closely related because the \\u201cclustering\\u201d of learned features in a trained DLGN (Section 5) is a crucial property exploited in the decision tree construction (Section 6). Admittedly, Section 5 could stand on its own and be expanded further without Section 6. But Section 6 plays an important role in making the DLGN a unique architecture. It demonstrates that the DLGN model allows for more control by the learner. For example, (say) a ReLU network model cannot be converted to any other known model type other than via painful retraining on surrogate data labeled by the ReLU network. \\n\\n\\n3. **Model finding label discontinuities is not surprising** : Broadly, yes, we agree. The title of the paper is perhaps a bit too simplistic. 
But the internals do more and give an alternative view of the term features itself, and give us the machinery and vocabulary to ask more meaningful questions. I am not sure the discontinuities can be called decision boundaries, as these have limited scope; e.g., consider the line corresponding to the root node 0 in Figure 1: the label function is only discontinuous along some sections of it. We have recently become aware of a concept known as the Gaussian surface area (from this year\\u2019s COLT best paper), which is very closely related to the notions that we have in mind and might play a crucial role in future work. The next two points are also related to this comment.\\n\\n4. **Potential future lines of discussion being opened up**: The DLGN model is explicitly a weighted sum of products of halfspace indicator functions ($m^L$ in number) and hence we are somewhat justified in calling the individual halfspaces (only $mL$ in number) features of the model. The halfspaces in the gating network seeking out the label function discontinuities (as demonstrated in Table 1) give us a way to ask questions regarding parameter efficiency, etc. Clearly, there is no need for multiple DLGN halfspaces to go towards the same discontinuity, but gradient descent (which is a greedy procedure) does it anyway. Maybe it would be possible to exploit this and have learning routines go beyond the greedy nature of gradient descent and build smaller models that perform similarly (say, identify two different gates going towards the same halfspace and pushing away one of them). This is, however, beyond our scope right now, but the very fact that such discussions are possible is the main point of this paper.\\n\\n5. **What is the secret sauce in deep networks?**: Another aspect that the DLGN opens up is questioning the source of power in a deep network. It has been taken as an article of faith that this is due to composition of multiple simple layers. 
But this paper postulates that the power of a deep network might be due to the fact that it learns separate features that interact (through a product). This also sheds some light on the mechanism by which training works in deep learning. The existence of multiple layers enables the \\u2018local\\u2019 nature of the discontinuity finding \\u2013 each layer operates in the context set by other layers, and hence, discontinuities that might be minor in a global context can become significant and drive the learning process. (Lines 61 to 65 in the paper). More concretely, in data labeled by a COB-ODT, the root node halfspace would not be seen as a good direction for any gating hyperplane to move towards when seen globally. But when the data is restricted to random convex polyhedra (by, say, all gating neurons in a path except layer $\\\\ell$), the gating function for the layer $\\\\ell$ neuron in the path might see that the hyperplane corresponding to the root node is a good separator and move towards it. The fact that there are a huge number of paths ($m^L$) means that this is likely to happen with at least a few paths.\"}", "{\"title\": \"Response to reviewer D8rp\", \"comment\": \"We thank the reviewer for the helpful feedback.\\n\\n1. In a fuller version of the paper, we did have a paragraph connecting the manifold to DL architectures, which we removed for brevity. The manifold still exists in the current version for reasons of generality. The definition of label discontinuities is applicable to any manifold, while we study only hyperplanes. The DLGN model discontinuities are indeed just hyperplanes, but the geometry and parameterisation of the discontinuity manifold in a model is one of the core architecture design principles -- e.g. ReLU networks have \\\"bent hyperplane\\\" discontinuity manifolds, and CNNs have a more complex bouquet of bent hyperplanes instead of a single bent hyperplane due to the parameter tying and convolutional nature of the weights. 
We do not have any immediately insightful comments on why these structures should give better models on real data than simple hyperplanes like DLGNs.\\n\\n2. $\\mathbf{u}^1_{i_1}$ is a scalar.\\n\\n3. The computational cost is approximately twice the cost of a forward pass of a single ReLU network of the same size. This is because of the way $g_\\pi$ is parameterised as a product of pairwise terms. A simple analogy is to see that a product of $L$ matrices of shape $m \\times m$ can also be written as a sum over all paths $\\pi \\in [m]^L$, but clearly matrix multiplication does not have exponential complexity in $L$. See below for a more detailed explanation.\\n\\n4. The model output can be simplified as $y(x) = {\\mathbf{u}^1}^\\top D^1(x) U^2 D^2(x) \\ldots U^L D^L(x) \\mathbf{u}^{L+1}$, where $D^\\ell(x)$ is a diagonal matrix corresponding to whether the nodes in the gating network in layer $\\ell$ are active or not for input $x$. This is essentially just multiplying $L$ matrices and hence has the same complexity as the forward pass of a ReLU network. We will add a short proof of the above statement in the appendix of the revised version.\"}", "{\"comment\": \"Thank you. This response addresses my concerns. I look forward to the revised version.\"}", "{\"title\": \"Response to reviewer n82d (Part 1 of 2)\", \"comment\": \"We thank the reviewer for the feedback.\\n\\n1. Thanks for the references; we will add a line discussing these as well in the revised version. These would squarely fall in the family of recent feature learning literature that aims to go beyond NTK.\\n\\n2. **Link between DLGN and DNN** : The tale behind the birth of the DLGN is rooted in the ReLU network. A ReLU network can be represented as $ y(x) = W_L D_L \\ldots W_1 D_1 W_0 x $, where the matrices $D_i$ are diagonal matrices that depend on $x$, taking values 1 or 0 depending on whether the corresponding neuron is active for the input $x$. 
When viewed as a function of $x$, the $i^{th}$ diagonal element of the matrix $D_l$ is rather complex, except for $l=1$, where it is simply equal to 1 on a halfspace and 0 outside it. DLGN simply makes the gating function for every neuron a halfspace indicator, whose parameters are given separately, i.e., in the above ReLU model output expression, replace the diagonal matrices $D_\\ell$ with diagonal matrices containing $\\eta^\\ell$ as defined in the paper. \\n\\n3. **Other architectures**: Studying CNNs or transformers is beyond the scope of the current work, but in other work we have been able to adapt multiclass CNNs to DLGNs and get performance within a few percentage points of a comparable ReLU net on CIFAR datasets. We reiterate, however, that the goal of this paper is to initiate discussion and inspire experiments/hypotheses of learning mechanisms that would not even be possible with ReLU networks. (See the response to reviewer tLyw for more details.) We paste a paragraph from that response here for convenience.\\n\\n4. **New lines of attack on feature learning** : The halfspaces in the gating network seeking out the label function discontinuities (as demonstrated in Table 1) give us a way to ask questions regarding parameter efficiency, etc. Clearly, there is no need for multiple DLGN halfspaces to go towards the same discontinuity, but gradient descent (which is a greedy procedure) does it anyway. Maybe it would be possible to exploit this and have learning routines go beyond the greedy nature of gradient descent and build smaller models that perform similarly (say, identify two different gates going towards the same halfspace and pushing away one of them). This is, however, beyond our scope right now, but the very fact that such discussions are possible is the main point of this paper.\\n\\n5. **Other contexts** : The main claim of model features seeking out label function discontinuities is general enough to handle other contexts in supervised learning. 
While the notion of label function discontinuities would still be sensible for multi-class classification, we will have to generalize this to include \\\"high-slope\\\" regions for regression. For example, consider the paper \\\"Learning Hierarchical Polynomials with Three-Layer Neural Networks\\\" by Wang et al., where the label function has the form $h=g \\\\circ p$. Let the scalar function $g$ be such that its graph is mostly flat but has jumps at values 1,2,3. Then the label function discontinuities exactly correspond to the manifolds $p(x)=1$, $p(x)=2$ and $p(x)=3$. The message in our paper would suggest that these manifolds will also appear in a trained deep network, which is indeed supported by a concrete theorem in Wang et al.\\n\\n6. **Message surprise** : In a sense, as mentioned by reviewer tLyw, this observation is not surprising -- any model that generalises has to discover these discontinuities in some form or the other. The complexity of the ReLU network architecture makes it so that one is unable to figure out where the representation of these discontinuities is hidden in a trained high-accuracy model. The DLGN with its clear sum-of-products decomposition makes its discontinuities explicit. Wang et al. make similar observations on a ReLU network by having a scalar bottleneck layer. Such modifications to practical architectures are necessary to even quantify anything about the internal structure of learned DNNs.\"}
Generalising these results to higher depth and getting meaningful insights into making training, inference or interpretation better for practice is a challenging task that has taken the community the better part of a decade with less than satisfactory results.\\n\\n8. **DL theory has all its eggs in one basket** : Almost all current theoretical feature learning papers leverage compositionality of the label function as a key structural component in both results and proofs. It is quite natural to think that deep networks, which themselves derive power via composition of simple functions, also learn features via composition, but this has yet to be concretely proven. \\n\\n9. **Alternate possibility** : Our paper postulates an alternate possibility. The power of a deep network might be due to the fact that it learns separate features that interact (through a product in our case of DLGNs). This also gives a plausible mechanism of learning. The existence of multiple layers enables the \\u2018local\\u2019 nature of the discontinuity finding \\u2013 each gating neuron operates in the context set by other layers, and hence, discontinuities that might be minor in a global context can become significant and drive the learning process (Lines 61 to 65 in the paper). More concretely, in data labeled by a COB-ODT, the root node halfspace would not be seen as a good direction for any gating hyperplane to move towards when seen globally. But when the data is restricted to random convex polyhedra (by, say, all gating neurons in a path except layer $\\\\ell$), the gating function for the layer $\\\\ell$ neuron in the path might see that the hyperplane corresponding to the root node is a good separator and move towards it. 
The fact that there are a huge number of paths means that this is likely to happen with at least a few paths.\"}", "{\"summary\": \"In this paper, the authors propose a mechanism to intuitively explain why neural networks can surpass kernel methods through feature learning. They hypothesize that the feature learning process involves the alignment of model function discontinuities with label function discontinuities during training. To explore this, they introduce a new network architecture called the Deep Linearly Gated Network (DLGN), designed as a surrogate for ReLU networks. They argue that this architecture retains similarities to ReLU networks while offering easier interpretability. Under this framework, they provide empirical evidence showing how model function discontinuities move toward label function discontinuities during training, facilitated by feature learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper is a bold attempt to deepen our understanding of feature learning in deep learning. The hypothesis, data setup, network architecture, and approach to interpretability are unconventional, and they bring a fresh perspective to the literature. This work has the potential to inspire new ideas and serve as a valuable starting point for further exploration.\", \"weaknesses\": \"From my perspective, while this work is intriguing, it is not yet ready for formal publication. My concerns are as follows:\\n\\n1. The authors reference previous studies examining the dynamics of single hidden layer models under specialized data and settings to push beyond kernel methods or deep linear models (Damian et al., 2022; Ba et al., 2022). They appropriately note that these analyses, often focused on specific data settings like the parity function, fall short of addressing the needs or behaviors of deeper networks. 
However, numerous works also investigate complex feature learning with deep neural networks, such as https://arxiv.org/abs/2305.06986 and https://arxiv.org/pdf/2311.13774. These studies should be acknowledged and compared with the current work.\\n\\n2. I am skeptical about the extent to which the new architecture resembles a ReLU network. Additionally, it is unclear how this intuition or design could extend to CNNs or transformers. Besides, the current version only applies to binary classification, whereas a universal feature learning mechanism should ideally apply across setups, such as binary classification, multi-class classification, and regression. I am unsure how the proposed intuition extends to these broader contexts. For example, existing papers (such as the two mentioned above) demonstrate that neural networks can efficiently learn $h = g \\\\circ p$ with $p$ quadratic and $g$ nonlinear via feature learning. How would this intuition explain such cases?\\n\\n3. In Section 3, the introduction to ODT progresses too quickly. The paper would benefit from a more mathematically detailed introduction to the new concepts presented in this section.\", \"questions\": \"See the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer xQ4v\", \"comment\": \"We thank the reviewer for the helpful feedback. Response to questions below. Points 1 and 2 are for question 1, points 3, 4 and 5 are for questions 2 and 3.\\n\\n1. In a fuller version of the paper, we did have a paragraph connecting the manifold to DL architectures which we removed for brevity. The manifold still exists in the current version for reasons of generality. The definition of label discontinuities is applicable to any manifold while we study only hyperplanes. 
The DLGN model discontinuities are indeed just hyperplanes, but the geometry and parameterisation of the discontinuity manifold in a model is one of the core architecture design principles -- e.g. ReLU networks have \\\"bent hyperplane\\\" discontinuity manifolds, and CNNs have a more complex bouquet of bent hyperplanes instead of a single bent hyperplane due to the parameter tying and convolutional nature of the weights. We do not have any immediately insightful comments on why these structures should give better models on real data than simple hyperplanes like DLGNs. \\n\\n2. Conceptually, we could have DQGN (deep quadratic gated network) in which the gating function for each neuron is an indicator of a quadratic function instead of a linear function. We could do that if we have a strong belief that the true label function can be compactly represented by just a few piecewise quadratic discontinuities instead of (say) millions of piecewise linear boundaries. \\n\\n3. Fair point regarding other hyperplanes. We need to restrict the region $\\\\mathcal{R}$ to reasonable sets for this to make sense, e.g., we should only consider those sets $\\\\mathcal{R}$ that are contiguous and intersect with both sides of the manifold $f(x) = 0$. Otherwise the conditional expectation in the expression would be vacuous. Under such reasonable restrictions, only the internal node hyperplanes (given by some $f(x) = 0$) have non-trivial regions $\\\\mathcal{R}$ such that $\\\\gamma(\\\\mathcal{R}, f) = 1$. Informally, it says that any input $x \\\\in \\\\mathcal{R}$ and a slightly perturbed input $x+\\\\delta$ have different labels. The perturbation $\\\\delta$ is chosen normal to the manifold $f$. We will fix this issue in the revised version.\\n\\n4. The definition of $\\\\gamma(\\\\mathcal R, f)$ was given for the purpose of quantifying the level of different discontinuities in the label function. For example, 
all the internal node hyperplanes in an ODT labelling function are clearly label function discontinuities in the informal sense, but how does one quantify the intuition that the root node hyperplane is a bigger discontinuity than a leaf node hyperplane? So we bring in a notion of \\\"scope\\\" of discontinuity. We define the scope of a discontinuity manifold $f$ to be the largest region $\\\\mathcal{R}$ such that $\\\\gamma(\\\\mathcal{R}, f) = 1$. Informally, it is the region in the input space in which an input $x$ and a slightly perturbed input $x+\\\\delta$ with perturbation $\\\\delta$ normal to the manifold $f$ have different labels. Clearly, the root node hyperplane has a larger discontinuity scope than the leaf node hyperplanes. We will add a figure illustrating this in the appendix. Also, this intuition really makes sense only in higher dimensions and with COB-ODTs, where most pairs of hyperplanes are normal to each other, and hence the figure (which is in 2d and does not correspond to an ODT with orthogonal node hyperplanes) could be misleading.\\n\\n5. The reason we even study discontinuity scopes is an empirical observation. In large trained DLGN architectures with COB-ODT labelling functions, disproportionately many DLGN gating hyperplanes go to the ODT root node hyperplane, while only a few of them move towards the leaf node hyperplanes.\"}", "{\"title\": \"Feature Learning Narrative Directions from DLGN. (Added in Appendix A7 of revision)\", \"comment\": \"We believe that the feature learning insights drawn from DLGN are applicable in general. More importantly, we believe that they can shed light on several phenomena that remain mysteries with deep networks. Some of the mysteries and candidate answers motivated from this paper are given below.\\n\\n**Q1: Is neural network training using gradient descent parameter efficient?**\\n\\n**A1**: No. 
The greedy nature of gradient descent pushes multiple interconnected parts towards the same feature when it is potentially not required. For example, in Table 4 the distances of learned DLGN hyperplanes (with width m = 100) to the ODT hyperplanes are given. While the root node hyperplane is discovered separately by multiple gating hyperplanes, some of the other internal nodes are left in the lurch. This causes poor generalisation. Potentially, this narrative can be exploited to make gradient descent less greedy.\\n\\n**Q2: How does gradient descent discover the label function discontinuities in high dimensional input space?**\\n\\n**A2**: The full labelling function may be far from being a linear separator, but when data is restricted within a small enough scope, one of the manifolds making up the decision boundary could very well be a good classifier (this argument uses the insight that even in high dimensional data, if it is linearly separable, picking up the separator can be done in a compute- and data-efficient manner). This causes that manifold to be picked up by one of the components of the deep network, thereby making the rest of the learning problem easier and kickstarting a virtuous cycle.\\n\\n**Q3: Why does neural network pruning enable better learning of smaller architectures than training the smaller architecture from scratch?**\\n\\n**A3**: We assume effective learning happens only in problems where the number of label function discontinuities is small. Significant parts of the neural network are simply not necessary to represent the discontinuities, which can be done by a small fraction of the trained deep network model. A large-width network is still better for training, because (say) doubling the number of neurons per layer increases the number of paths by a factor of $2^L$. This increases the chances of some path picking the right scope during training, enabling the learning of the appropriate label function discontinuity. 
Once the learning is complete, the significant fraction of the network that was unlucky enough not to have gotten a good scope can be removed.\\n\\n**Q4: What is the role of layers in deep networks? Is increasing the number of layers always beneficial?**\\n\\n**A4**: The main role of layers is to give context/scope to other layers. For example, with a depth 4 ODT labelling function, a DLGN with 3 or fewer hidden layers would not be able to give a good scope to any neuron. A DLGN with 5 or more layers is just unnecessary. This is reflected in our experiments as well, where a depth 4 DLGN performed best for a depth 4 ODT labelling function.\"}
Below, we make an attempt to address this concern.\\n\\nThe DLGN is a surrogate architecture designed for a more detailed analysis of the feature learning mechanism in the learning of deep non-linear models by gradient descent. Owing to the mathematical complexity of deep ReLU networks, all progress on our understanding of the learning mechanism is from surrogate architectures/algorithms which are not fully faithful to the original. An incomplete list follows:\\n\\n1. Neural Tangent Kernel approaches: Theoretically, these require a large number of hidden neurons (a high degree polynomial in the number of data points). It is not clear that any of the findings in this literature can be applied directly to practical ReLU nets, but they are generally acknowledged as giving a valid first-order understanding of the learning mechanism. \\n\\n2. Deep linear networks: Claims to represent optimisation dynamics of deep networks, but is incapable of capturing feature learning. \\n\\n3. Two layer architectures/Bottlenecks/Modified gradient descent: There is a host of literature on this line trying to explain the outperformance of GD on deep networks over kernel methods, but under a restricted two layer setting with a modified version of gradient descent.\\n\\n4. Neural network Gaussian processes: Does no gradient descent at all, and uses Bayesian conditioning instead.\\n\\nNone of the above lines of work yield (currently) useful learning procedures that are actually deployed. None of the insights they generate are proven to apply to the more practically used deep architectures. Nonetheless, we value these results because they potentially enable the analysis of current algorithms or the design of new and better learning algorithms.\\n\\nThis brings us to a key point. The machine learning community is not really interested in unravelling the secrets of ReLU networks. 
It is more concerned with finding out the essence of what makes them work and hopes to find something better (in terms of train/test space/time complexity, more control and interpretability, etc.). This is the reason why alternate surrogate architectures are valuable. Each such architecture claims that the core reason for the success of deep networks is captured by it. Our work is no different in this sense. We propose the DLGN architecture motivated by the ReLU net (see point 6 in our response above) and claim that the core reason behind the success of deep networks (see point 5 in our response above) is captured by it.\\n\\nThere are two core claims in the paper:\\n\\n1. DLGN learns separate features that interact through a product. The existence of multiple layers enables the \\u2018local\\u2019 nature of the discontinuity finding \\u2013 each layer operates in the context set by other layers.\\n\\n2. ReLU networks and other deep architectures deployed in practice behave similarly.\\n\\nWe provide some support for the first claim, and just appeal to the similarity of the DLGN to ReLU nets for the second. \\n\\nBy the very nature of the goal of designing surrogate architectures, an appeal to similarity is all that is possible for the second claim (which is made implicitly or explicitly in all the architectures listed above). \\n\\nThe value of our work lies in its ability to give a narrative to multi-layer learning beyond the usual 2 hidden layer theory. None of the other surrogate architectures mentioned above have such an ability. 
While admittedly we do not have an impressive looking theorem proving this narrative, it is possible to do so if enough extra assumptions are made (like for example optimising over $g_\\\\pi$ directly, and using a block alternate minimization procedure where each weight vector is optimised while keeping others fixed) which would make the value of such a Theorem questionable.\\n\\nAn example of the value of the narrative above is given below. We could find sub-optimal behaviours of gradient descent, and potentially design novel algorithms beyond gradient descent. For example, consider the new Table 4 (which is in the same setting as Table 1, but with a wider DLGN architecture of $m=100$). While this architecture also attains a high train accuracy, it performs worse than the $m=20$ architecture on the test data. It is easy to attribute this to \\u201coverfitting\\u201d and say that $m=20$ is the right hyperparameter choice. This is the only possible thing to do in an opaque architecture like the ReLU network. But in the DLGN model with ODT label function setting, we can actually construct something like Table 4 and see that several of the internal node hyperplanes are not faithfully captured and the root node discontinuity is overrepresented. Thus we can give a deeper explanation of the failure than the surface level explanation of \\u201coverfitting\\u201d. \\n\\nWe have added a few other deep learning phenomena candidate explanations to the Appendix.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Follow-up on some of the authors' points\", \"comment\": \"> DLGN is not a decision tree.\\n\\nI agree, but I would argue it is an alternative parameterization of a _soft_ decision tree ([Soft Decision Trees](http://www.cs.cornell.edu/~oirsoy/files/icpr21.pdf). _O. Irsoy, O. T. Yildiz, E. Alpaydin_. ICPR 21), which also computes a weighted combination of all paths in a tree. 
\\n\\n> Section 6 plays an important role in making the DLGN a unique architecture. It demonstrates that the DLGN model allows for more control by the learner. For example, (say) a ReLU network model cannot be converted to any other known model type other than via painful retraining on surrogate data labeled by the ReLU network.\\n\\nI am not convinced demonstrating the ability to convert a DLGN to a decision tree helps the paper's goal of explaining feature learning in deep neural networks, given the authors' claim that the \\\"utility of the DLGN architecture is in being able to test our hypotheses regarding feature learning\\\".\\n\\n> I am not sure the discontinuities can be called decision boundaries, as these have limited scope, e.g. consider the line corresponding to the root node 0 in Figure 1, the label function is only discontinuous along some sections of it.\\n\\nI believe this does not contradict the definition of a _non-linear_ decision boundary. And, again, the claim that for a model to do well on a classification task it has to capture decision boundaries is almost tautological. \\n\\n> But this paper postulates that the power of a deep network might be due to the fact that it learns separate features that interact (through a product).\\n\\nSee the point below.\\n\\n> Based on our current understanding of ReLU networks, it is not possible to show or check if a ReLU network discovers these discontinuities. (Where would we even look for these in the parameters of the DNN?). That we cannot demonstrate even such a non-surprising conclusion for the ReLU is exactly the reason we study the DLGN instead. In short, the significance of DLGNs is that there are no surprises and things are not \\u201chidden\\u201d/inscrutable.\\n\\nContinuing the point above, that's precisely what I see as a significant weakness of the current narrative. 
We see that this happens in a DLGN, but how can we be sure something like this happens in a ReLU network? The discussion on the connection between a DLGN and a ReLU network that the authors provide in a separate paragraph is a start, and expanding on it, as well as showing how this connection suggests ReLU might demonstrate comparable behavior, would make the paper much stronger.\\n\\nI maintain my score for now.\"}", "{\"title\": \"First reply to reviewer tLyw (Part 2 of 3)\", \"comment\": \"6. **Link between DLGN and DNN** : The tale behind the birth of the DLGN is rooted in the ReLU network. A ReLU network can be represented as $ y(x) = W_L* D_L* \\\\ldots W_1* D_1 * W_0 * x $, where the matrices $D_i$ are diagonal matrices that depend on $x$, taking values 1 or 0 depending on whether the corresponding neuron is active for the input $x$. When viewed as a function of $x$, the $i^{th}$ diagonal element of the matrix $D_l$ is rather complex, except for $l=1$ where it is simply equal to 1 on a halfspace and 0 outside it. DLGN simply makes the gating function for every neuron an indicator of a halfspace, whose parameters are given separately, i.e. in the above ReLU model output expression replace the diagonal matrices $D_\\\\ell$ with diagonal matrices containing $\\\\eta^\\\\ell$ as defined in the paper.\\n\\n7. **Can we say the same statement about ReLU networks?**: Based on our current understanding of ReLU networks, it is not possible to show or check if a ReLU network discovers these discontinuities. (Where would we even look for these in the parameters of the DNN?) That we cannot demonstrate even such a non-surprising conclusion for the ReLU is exactly the reason we study the DLGN instead. In short, the significance of DLGNs is that there are no surprises and things are not \\u201chidden\\u201d/inscrutable.\\n\\n8. **DLGNs utility**: We don\\u2019t make the claim that DLGN is a better architecture that can be applied right now. 
The utility of the DLGN architecture is in being able to test our hypotheses regarding feature learning, something that is not possible with deep linear networks (because they are essentially still linear) and ReLU networks (they are too complex). The experimental results are merely to establish that DLGNs outperform kernel methods, and hence learn non-trivial features. That they are competitive with random forests is merely an add-on bonus.\\n\\n9. **DLGN interpretability**: The source of complexity in ReLU nets lies in the composition of nonlinear functions. Also, there is no meaningful notion of an \\u201cindependent part\\u201d in a ReLU network \\u2013 scalar weights connecting neurons are meaningless by themselves, and neurons as real-valued functions over the input space require almost as many parameters to describe as the entire network itself. In the case of the DLGN, the source of power and complexity is just a sum of products of simple functions. While there are exponentially many ($m^L$) features, this complexity is due to a structured combination of merely $mL$ features. Also, each gating neuron is a meaningful part of the DLGN that can be compactly described as a halfspace in the input domain. This explicit sum-of-products nature of the DLGN makes it so that it can be almost considered a white-box model.\\n\\n10. **Tangential comment on the current state of interpretability** : We are not particularly enthusiastic about the current paradigms of interpretability (a la GradCAM, LIME, SHAP, etc.) which go to the extent of making the explanation itself a prediction. The goodness of an explanation is judged by how subjectively satisfactory it is found, rather than by how faithful to the model the explanation is. We favor a more deconstructionist approach where we say \\u201cwe understand something only if we can break it apart and put it together again\\u201d. 
The rise of *mechanistic interpretability* (we are not fully sure of what an appropriate definition of this would be) in current literature suggests that current ML researchers are cognizant of this issue.\"}" ] }
52Idqv2FNY
Correlating and Predicting Human Evaluations of Language Models from Natural Language Processing Benchmarks
[ "Rylan Schaeffer", "Punit Singh Koura", "Binh Tang", "Ranjan Subramanian", "Aaditya K Singh", "Todor Mihaylov", "Prajjwal Bhargava", "Lovish Madaan", "Niladri S. Chatterji", "Vedanuj Goswami", "Sergey Edunov", "Dieuwke Hupkes", "Sanmi Koyejo", "Sharan Narang" ]
The field of natural language processing (NLP) historically evaluated language models using benchmarks with automated metrics. However, the recent advent of highly capable chat language models (LMs) has caused a tectonic shift from NLP benchmarks to human evaluations. The relationship between these two evaluation processes is unclear and underexplored for chat LMs. Broadly, to what extent are human evaluations and NLP benchmarks correlated with one another? How well can computationally inexpensive and automated benchmarks predict expensive and time-intensive human evaluations? Which benchmarks provide predictive signals for human preference for LMs? What role, if any, should benchmarks play in the era of chat LMs? To answer these questions, we conducted a large-scale study of the relationships between human evaluations and benchmarks. We show that benchmarks are broadly highly correlated with human evaluations, and we identify which benchmarks exhibit strong correlations with human evaluations and which do not. Having established that reliable correlations exist, we fit models to predict a language model’s human evaluation scores from its academic evaluation scores and provide evidence that such predictive models can generalize across LM scales.
[ "language models", "evaluations", "human evaluations", "benchmarks", "NLP benchmarks" ]
Reject
https://openreview.net/pdf?id=52Idqv2FNY
https://openreview.net/forum?id=52Idqv2FNY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zZOMpH1eXB", "wd45NoWT7O", "voT88BlVn9", "vWoLjadCay", "u0qCfmLdAg", "sQKNukmK1y", "oqUDEGnPsN", "l17xywYObH", "dzhuSrCErK", "dj4UINCdXk", "Voc4KKYC5k", "QEmNpcIxsU", "NiOdBl7Qas", "NMT5zhKYXG", "IZWvTWR6rm", "EWnCLfai3a", "DV06H4OqGP", "DKdRNDNoNM", "CmgLXDNrEs", "CXztsZYEaJ", "AYUuGyk9ek", "9CWImIPkXe" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733201443024, 1730614005910, 1732579889681, 1732851789763, 1733200908064, 1733212945035, 1730707220571, 1732649726190, 1732316321050, 1737523924719, 1733200039208, 1733206649387, 1730186824525, 1733202139177, 1734532694893, 1733212911356, 1730720189677, 1732583289221, 1732583309531, 1733198751885, 1732580661739, 1732579918740 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_MpHR" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_evg9" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_Khx9" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_Khx9" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_evg9" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_sJ5t" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_Khx9" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_Khx9" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Area_Chair_E6kR" ], [ 
"ICLR.cc/2025/Conference/Submission8665/Reviewer_Khx9" ], [ "ICLR.cc/2025/Conference/Submission8665/Reviewer_sJ5t" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ], [ "ICLR.cc/2025/Conference/Submission8665/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer MpHR\", \"comment\": \"Thank you for your review. We appreciate that you felt the research topic of this work is meaningful. To address the concerns you raised:\\n\\n> their experimental analyses remain confusing and fail to help readers capture the main points.\\n\\n> From Figure 1 onward, the clarity and readability of the charts decline rapidly, and by Figure 6, it becomes nearly impossible to extract any information as the fonts are extremely small and the visualized results are poorly presented.\\n\\nThis is a valid criticism that we significantly improved in our revised and resubmitted manuscript. 
To highlight key changes:\\n\\n- We better described our analyses in the main text (Section 3)\\n- In case that was inadequate, we also created a new Appendix A detailing our experimental methodology, including both data and analyses\\n- We added Appendix B with basic analyses of our data to provide additional information to readers\\n- We created new visualizations (specifically Figures 3, 5 and 7) that hopefully are more easily read and understood\\n- We removed old visualizations that you and other reviewers felt did not add value and perhaps even subtracted value\\n- We also cleaned up Figure 6 (but embarrassingly forgot to include it in our final submitted manuscript) to make the text more legible\\n\\n> For example, in line 149, what does the \\\"evaluation process\\\" refer to\\n\\nAn evaluation process is the benchmark (e.g., MMLU), a possible subset (e.g., College Mathematics), plus any additional information necessary to specify how models are scored, such as: the metric (accuracy, ROUGE-2, ROUGE-L, pass@k, F1, etc.), 0-shot or few-shot (and if few-shot, how many shots), whether answers are sampled from the model (and if so, how many), whether Chain-of-Thought reasoning is used, etc. We have clarified this in the main text and in our new Appendix A.\\n\\n> why are approximately 150 combinations calculated in total?\\n\\n~150 on Line 148 is the number of NLP benchmark scores per model. The specific number is 160. This number arises because some NLP benchmarks have multiple subsets that we do not aggregate over. For instance, ARC has two subsets (ARC-Easy and ARC-Challenge), so each model receives two scores on the ARC benchmark. 
We stated this on Line 138: \\u201cSome of these benchmarks (e.g., MMLU) contain subsets (e.g., Jurisprudence) that we treat individually.\\u201d\\n\\nWe clarified this in the main text and added Table 1 in Appendix A.1 to state exactly what benchmarks, subsets, metrics, few-shot settings and generation settings we use.\\n\\nWe would welcome language to help us better communicate this point.\\n\\n> Additionally, if I understand correctly, it seems unfair to compare human evaluation results across mixed task types with different NLP automatic evaluation benchmarks that may focus on testing certain different abilities.\\n\\nWe added a paragraph to explain that this was an intentional decision and to motivate why: In this work, our aim was specifically to identify which NLP benchmark scores are predictive of human preferences on open-ended prompts representative of real-world chat model usage. We chose this approach to maximize the ecological validity and generalizability of the findings to real-world use cases. For a concrete example, we may want our chat language models (LMs) to excel at providing bespoke career advice; which NLP benchmarks provide useful signals for whether models are improving at such tasks?\"}", "{\"summary\": \"This work attempts to explore the correlation or consistency between common NLP automatic evaluation benchmarks and human evaluations in analyzing and comparing the capabilities of language models. They cover a wide range of datasets and conduct experiments on four different sizes of Llama 2 models and GPT-3.5, employing human annotators to provide evaluation data. They find that there is a high correlation between automatic benchmarks and human evaluations, and they identify which benchmarks show stronger correlations. 
Furthermore, they fit models to predict human evaluation scores of language models from academic evaluation scores.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The motivation and research questions of this work are very interesting and significant. Considering that language models are becoming increasingly powerful, many traditional NLP benchmarks may have lost their discriminative power, leading researchers to turn to human evaluations, which are more costly and harder to reproduce. By analyzing the consistency between NLP automatic evaluation benchmarks and human evaluations, this work aims to identify highly consistent benchmarks to simulate human evaluations, thereby reducing evaluation costs. Their experiments cover a large range of datasets and settings, including constructing various categories of human evaluation data and many common NLP automatic evaluation benchmarks, demonstrating a very comprehensive effort.\", \"weaknesses\": \"Although the research topic of this work is meaningful, it is also very complicated and entails a challenging analysis process. Even though the work has tried to handle the experimental data and present corresponding results as macroscopically as possible, their experimental analyses remain confusing and fail to help readers capture the main points. From Figure 1 onward, the clarity and readability of the charts decline rapidly, and by Figure 6, it becomes nearly impossible to extract any information as the fonts are extremely small and the visualized results are poorly presented.\\n\\nSome analytical settings in the paper are unclear or somewhat unreasonable. For example, in line 149, what does the \\\"evaluation process\\\" refer to, and why are approximately 150 combinations calculated in total? What do they represent? 
Additionally, if I understand correctly, it seems unfair to compare human evaluation results across mixed task types with different NLP automatic evaluation benchmarks that may focus on testing certain different abilities.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Khx9 (Part 1)\", \"comment\": \"Thank you for your review! We are grateful to hear you write that our paper studies a very important problem and that the insights of our paper could help design NLP benchmarks that are more predictive of human evaluations.\\n\\nWe are working on improving the manuscript and should have a more complete version posted in 1-2 days.\\n\\nIn the interim, to address a subset of the concerns you raised:\\n\\n> The experiment parts are highly unclear and hard to comprehend. It is unclear how the correlations are calculated between human evaluation and NLP benchmark scores. There is even no Experiment Setup section in this paper\\n\\nWe are adding a detailed Experimental Methodology as Appendix A to describe our methodology in meticulous detail.\\n\\nAs an overarching comment, all of our NLP benchmark scores are computed in the \\u201cdefault\\u201d or \\u201cstandard\\u201d manner, e.g., as one would find in Meta\\u2019s Llama 2 paper, which is the work that our paper built on top of. Please see the new Appendix A.1; we will add more information about human evaluations today or tomorrow.\\n\\n> How do you aggregate the scores of different shots?\\n\\n> Why do you aggregate the results of different shots?\\n\\nTo clarify, we do not aggregate scores of different shots. Could you please point us towards where you read this? 
We\\u2019re unclear what in our manuscript gave you this misconception and we would like to correct whatever text gave this impression.\\n\\nTo clarify, as you may know, different NLP benchmarks are oftentimes evaluated with different numbers of shots. For example, we evaluate AGI 5-shot, BoolQ 0-shot, CommonSenseQA 7-shot, etc. We use whatever number of shots is considered \\u201cstandard\\u201d for each benchmark and do not experiment with these hyperparameters.\\n\\n> What is the number of shots?\\n\\nWe have added Table 1 to Appendix A detailing the number of shots for each benchmark. As you may know, different benchmarks are evaluated with different numbers of examples. The number of shots for each benchmark was chosen to match the Llama 2 paper and we did not explore the effects of changing the number of shots.\\n\\n> How is the prompt formatted?\\n\\n> How are the demonstrations in the few-shot selected?\\n\\nDemonstrations and prompts are selected and formatted in the \\u201cstandard\\u201d or \\u201cdefault\\u201d manner for each NLP benchmark. We followed the exact prompt selection and formatting from the Llama 2 paper.\\n\\n> Where does the number 150 on Line 148 (page 3) come from?\\n\\n~150 on Line 148 is the number of NLP benchmark scores per model. The specific number is 160. This number arises because some NLP benchmarks have multiple subsets that we do not aggregate over. For instance, ARC has two subsets (ARC-Easy and ARC-Challenge), so each model receives two scores on the ARC benchmark. We stated this on Line 138: \\u201cSome of these benchmarks (e.g., MMLU) contain subsets (e.g., Jurisprudence) that we treat individually.\\u201d We would welcome language to help us better communicate this point.\\n\\n> How is the human evaluation conducted? How many samples are there in the single-turn and multi-turn dialogues? How are the topics selected? 
What is the distribution of the data?\\n\\nThe human evaluations were conducted by contracting with a well-known data labeling company (redacted to preserve anonymity). The methodology for querying humans is described in lines 108 to 127. For single-turn evaluations, we have 1917 samples. Regarding how the topics were selected, as we stated on Line 123, \\u201cThis taxonomy was chosen to broadly cover common use-cases of Chat LMs.\\u201d \\n\\nRegarding \\u201cWhat is the distribution of the data?\\u201d, could you please be more specific? What exactly would you like to know?\\n\\n> If the paper only uses four models, is the correlation coefficient calculated using only the benchmark scores of 4 models and the human evaluation results of the models? This means we are only calculating the correlation coefficient between two sets for numbers with only four elements in each set.\\n\\nYes, this is correct. We stated this on line 149: \\u201cWe then computed three standard notions of correlation over the 4 average scores per model.\\u201d\\n\\n> There are only four models used in this paper: the four chat models in Llama-2 with different numbers of parameters. The abilities of those models are very distinct, so it is easier for human evaluators or NLP benchmarks to distinguish the strengths of these models. A more challenging and realistic scenario is to consider more LLMs whose abilities are more diverse.\\n\\nThis misunderstands the goal of our experimental design. The goal is to understand how human evaluation scores change as NLP benchmark scores change. Consequently, we want to see as much variance as possible because higher variance provides stronger signal for both correlating and predicting NLP benchmark and human evaluation scores. If we considered models with nearly identical scores, then this analysis would become much harder. By choosing models of different strengths, our analysis could be more robust.\"}", "{\"comment\": \"Thanks for your response. 
I still suggest you add more LLMs in addition to the Llama 2 family. I understand the cost of human evaluation, but you should plan carefully before conducting human evaluation: increase the number of systems (LLMs) and decrease the number of instances per system accordingly. Considering that the only link between the NLP benchmarks and the human evaluation in your study is the four systems, it is hard to do other analyses.\"}", "{\"title\": \"Response to Author's Nudge\", \"comment\": \"Additionally, I would like to express my **strong objection** to the authors saying ***your score of 1 is unjustified***.\\nIn my original review, I justified my score using more than 350 words with clearly formatted bullet points. That is two to four times the length of what other reviewers wrote in their weaknesses sections. Of course, more words do not translate to a better review, but I am just saying that the number of weaknesses I deem this paper to have is very significant. Those questions are what readers will ask when reading this paper, and the weaknesses I raised will surely be spotted by other readers. My review highlights the fatal weaknesses of the paper, spanning from **experiment soundness, significance and impact of the results, and presentation**. I provide questions that highlight why the paper is unclear and actually point out some places that are imprecise in the manuscript, which the authors acknowledge in their responses (e.g., wrong model size for Llama-2-34b, the number 150 in the paper). Those imprecise numbers create difficulty when a reader wants to reproduce the paper, and the job of the reviewers is to point them out. I believe this is what my review has done. 
All these weaknesses together justify the score I initially gave.\"}", "{\"title\": \"Responses to Authors' Responses (2/2)\", \"comment\": \"> Moreover, your initial score of \\\"1\\\" is, in general, extremely aggressive.\\n\\nI believe the score a paper deserves is highly subjective, and I have already been very objective about this. I have listed reasons why the paper should be rejected. When adding them together, I think giving a score of 3 is not enough since it seems unfair to other papers that receive a 3, so I have no choice but to give this paper a 1. This paper does not have sound experiments, does not have a reasonable presentation, and does not have sufficient experiments to justify its main claim. Again, I feel it is unnecessary to argue whether this paper deserves a score of 1 since this is well and repeatedly justified in my review and responses.\\n\\n> Regarding the small model set: While four models may seem limited, they provide a controlled experiment across model scale while holding architecture and training constant. This allows us to isolate how performance changes with scale.\\n\\nNothing is resolved since I simply believe using only four models is not sufficient, agreeing with evg9.\\n\\n> Figure readability: We made a serious effort to (i) improve our figures, (ii) add additional figures and (iii) remove unhelpful figures. Rather than criticizing the figures, you could be significantly more helpful if you tell us how to improve our figures.\\n\\nThis response seems quite defensive and somewhat unpleasant to read. I also asked GPT-4o if this is polite, and this is what GPT-4o responded: *Your initial response might be seen as defensive because it indirectly suggests that the reviewer's critique of the figures might not be constructive (\\\"Rather than criticizing the figures, you could be significantly more helpful if you tell us how to improve our figures\\\"). 
This wording implies that the criticism wasn't helpful, which could be interpreted as a dismissal of the reviewer\\u2019s feedback.* So, back to the problem. The issues with the figures are still not fixed in the latest revision. Please answer the following question: Can a reader read the tiny words in Figures 5 and 6 when the paper is printed on A4 paper? I cannot. My recommendation? Make the font larger. \\n\\n> Community detection methodology: We will add an appendix section to Appendix A detailing this methodology. To explain here, community detection is a standard approach in network science. In our context, we have a bipartite graph, where the two node sets are human evaluations and NLP benchmarks, and the edge weights are the correlations between nodes. To identify such communities, we turn to the most basic linear algebra primitive (i.e. SVD) and study the different singular modes.\\n\\nThis is further evidence that the paper is unclear. The paper, even in its revision, does not mention the term *community detection*. It is unlikely that a reader will know the community that suddenly appears here refers to a community in the community-detection sense. While I can guess what this means, and I know community detection, I don't think a good paper should make the readers guess what the paper is trying to say.\"}", "{\"summary\": \"The paper studies the relationships between the evaluation results of automated NLP benchmarks and those of human evaluation. It mainly revolves around two research questions: how well human evaluations and NLP benchmarks correlate with each other, and how well NLP benchmarks can predict human evaluation. Specifically, the authors develop a set of 1917 prompts organized by areas, categories, and subcategories, select four LLMs from the Llama 2 family, get their responses to the prompts, and conduct a large-scale pairwise human evaluation. The evaluation results of the four models on many automated NLP benchmarks are also derived. 
Then, the paper analyzes the correlations between human evaluation and automated NLP benchmarks and finds that they are highly correlated in most cases. Furthermore, the authors decompose the correlation matrix into rank-one components and demonstrate the communities between human evaluations and NLP benchmarks. Finally, the authors try to fit a regression model to predict the human evaluations with automatic evaluation results as inputs.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The research question of this paper\\u2014the relationship between evaluation results from automated NLP benchmarks and human evaluations\\u2014is generally important and meaningful. Recently, numerous automated benchmarks and human evaluations have emerged separately, but there has been little research on the relationship between them.\", \"This paper covers many automated NLP benchmarks and includes a large-scale human evaluation, which lends a certain level of generality to its results.\"], \"weaknesses\": [\"Although the idea of this paper is beneficial, many obvious flaws diminish its value.\", \"This study uses only four LLMs, which is too few. This leads to\", \"The correlations between automated NLP benchmarks and human evaluation are calculated merely from two four-dimensional vectors, which is unreliable\", \"Insufficient experiments for predicting human evaluation from automated NLP benchmarks, despite cross-validation conducted in the paper\", \"The paper lacks key details, including but not limited to how the prompt set used in human evaluation is obtained, the human evaluation process and its reliability (e.g., inter-annotator agreement), details of how the correlation is calculated (what are the ~150 evaluation processes?), and the settings for linear regression. This not only creates difficulty in understanding but also raises doubt about the rigor of this study.\", \"The presentation of the paper could be improved. 
For instance, the font sizes in Fig 3, the upper part of Fig 4, and Fig 6 are too small, making them hard to read.\"], \"questions\": [\"More LLMs should be covered in this study. I understand the computational cost during inference and the cost in human evaluation, but four LLMs are definitely too few to support subsequent experiments.\", \"I do need more details of the human evaluation in your study. What makes me most confused is the selection of the prompts. Why don't you use the same question sets as those of automated NLP benchmarks? If there are too many, you can sample from each dataset. Now there is a mismatch between the prompts (questions) in human evaluation and automated NLP benchmarks, and the mapping relationship is not clear. Even ignoring the mismatch issue, you should provide the number of prompts per area and category used in human evaluation.\", \"The rank-one decomposition experiments in Section 3.3 need to be further explained. Can you better state your motivation for conducting this decomposition and what insights we can draw from it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up\", \"comment\": \"I thank the authors for their thoughtful and prompt responses. I believe I have already scored at a level that marks the importance of the work, but any higher wouldn't be merited given its scope.\\n\\nThe related work is done well. As a small nit, it would be of great help if the choice of GPT-3.5, as you have answered here, were also written in the paper to motivate the approach given the backdrop of high costs. Maybe there are other ways to explore the generalizability of this work, even with limited samples, and I hope you explore them. Thank you for writing the paper.\"}", "{\"title\": \"Response to Reviewer sJ5t\", \"comment\": \"Thank you for reviewing our paper! 
We are grateful to read that you find the question at the heart of our paper is an important and central question to NLP evaluations, and that our work contributes important insights into these evaluations.\\n\\nTo address a subset of the concerns you raised:\\n\\n> The small sample size brings into question the generalizability of these insights and results.\\n\\nWe agree. Sadly, the human experiments are costly and slow and we ran into errors collecting human evaluations for additional models. We highlight this as a core limitation of our work but see no way around it.\\n\\n> Only uses GPT-3.5 as the comparative model, no insight is provided into why this is the case? And also lacks any discussion of whether chatgpt 3.5 is a reasonable choice of a baseline.\\n\\n> Why chatgpt 3.5? Could you justify the choice of this model? Why was chatgpt 3.5 the model chosen for comparison, is it a reasonable choice for a baseline?\\n\\nWe used GPT-3.5 because at the time this data was collected, GPT-3.5 was a good balance of three desirable properties for our study: (i) performant, (ii) cheap, and (iii) stable.\\n\\nWe feel the choice of baseline is not so critical because our goal is to assess how improvements in NLP benchmark scores correlate with and predict improvements in human evaluation scores. Thus, what matters is how models vary/improve. We acknowledge that multiple baselines would be ideal, but this was out of budget.\\n\\n> if these outputs were obtained from Chatgpt 3.5, which API was it received from, and what was the exact cutoff (e.g., ChatGPT-3.5-0604, etc.)?\\n\\nLine 110: gpt-3.5-turbo-0301.\\n\\n> Perhaps a granular analysis of what makes a benchmark more correlated? Is there something common in the correlated benchmarks? This would also pave the way to designing and determining better benchmarks.\\n\\nThis could be quite an interesting analysis! However, this would be beyond the scope of our paper. 
This would require trying to \\u201cfeaturize\\u201d benchmarks and then testing which features of benchmarks lead to correlations between benchmarks.\\n\\n> Just a general question about related work: is there no related work? (while this correlation aspect might not have been explicitly studied), Studies have considered which of them is better in MT, summarization, and other NLP areas. Can you provide a more comprehensive overview of related work, including studies that have compared human evaluations and benchmarks in specific NLP tasks like MT/Summarization, and contextualize this work in the broader field?\\n\\nThis is indeed a shortcoming that we will address. We will add a Related Work section.\\n\\nWe will update the manuscript in 1-2 days and address your remaining concerns then.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Author Responses\", \"comment\": \"Thank you for your responses.\\nHowever, I do not think the responses and modifications increase the quality of the manuscript to a level that can be accepted. I appreciate the authors trying to clarify where my misunderstandings are from. However, I cannot directly point out where in the manuscript gives me such misunderstanding as the original version of the paper really did not include too many details for me to understand, so I can only guess. It is not about what the paper writes but instead what the paper didn't write.\\n\\nGiven that the experiment settings are somewhat clearer in the revised version, I can increase my score to 3. However, **fatal weaknesses** remain in the paper, including (1) only using four Llama models whose strengths vary a lot; this is a very limited study to draw the conclusion that **benchmarks correlate well with human evaluation**. 
While the authors say this is a misunderstanding of their work, I don't believe this is a misunderstanding, as the abstract claims that *\\\"benchmarks are broadly highly correlated with human evaluations\\\"*. I want to point out that drawing a conclusion on only these four models is not convincing enough. This is also pointed out by another reviewer. (2) The figures are still not readable. I do not think such formatting is friendly to readers, making me doubt whether this paper is suitable for publishing. (3) I still do not understand what the term \\\"community\\\" refers to in Section 4.3. After reading the revision and rebuttal, I cannot fully understand what kind of analysis is used here. Again, there are no details on this.\"}", "{\"title\": \"Response to Reviewer Khx9\", \"comment\": [\"Thank you for your detailed feedback and for increasing the score based on our clarifications. We apologize if our previous message came across as dismissive of your thorough review - that wasn't our intent. We also do appreciate your engaging; as we're sure you're aware, many ICLR reviewers aren't engaging.\", \"However, we feel we must respectfully address several points from your initial review that may have contributed to an unnecessarily low initial score:\", \"Your review suggested we were aggregating results across different numbers of shots, which is incorrect.\", \"All of our evaluations are standard and exactly follow prior work. Your objections about lack of clarity for how many shots were used, how prompts were formatted, etc. does not seem well founded.\", \"You identified a lack of an \\\"Experiment Setup\\\" section as a serious shortcoming. While we agree that our manuscript can improve with additional clarification and we added Appendix A, our analyses are 3 simple analyses of two matrices (the human evaluation scores and the NLP benchmark scores): (i) correlations, (ii) singular value decomposition and (iii) linear regressions. 
Such simple analyses should not require significant explanation.\", \"You questioned our access to Llama-2-34B as if some nefarious plot was afoot. Rather, we had explicit permission from Meta to use this model. This access actually strengthens our paper by providing evaluation data on a model not widely available to the community.\", \"Moreover, your initial score of \\\"1\\\" is, in general, extremely harsh. Independent of our paper, we feel that \\\"1\\\"s should be given out extremely rarely and only for work that is exceptionally and egregiously inadequate.\", \"While the ICLR manuscript resubmission deadline has passed, we want to address your remaining concerns:\", \"Regarding the small model set: While four models may seem limited, they provide a controlled experiment across model scale while holding architecture and training constant. This allows us to isolate how performance changes with scale.\", \"Figure readability: We made a serious effort to (i) improve our figures, (ii) add additional figures and (iii) remove unhelpful figures. Rather than criticizing the figures, you could be significantly more helpful if you tell us how to improve our figures.\", \"Community detection methodology: We will add an appendix section to Appendix A detailing this methodology. To explain here, community detection is a standard approach in network science. In our context, we have a bipartite graph, where the two node sets are human evaluations and NLP benchmarks, and the edge weights are the correlations between nodes. To identify such communities, we turn to the most basic linear algebra primitive (i.e. SVD) and study the different singular modes.\", \"Would you be open to providing additional feedback about how to improve further? 
Your critique has helped us identify important ways to make this work more rigorous and accessible, while still maintaining its core contributions.\"]}", "{\"summary\": \"This paper studies the relationship between NLP benchmarks and human evaluation results and aims to understand what roles NLP benchmarks should play in the era of LLMs. They conduct human evaluations on four Llama 2 chat models and calculate the correlation between human evaluation results and NLP benchmarks, spanning open-domain QA, MMLU, and safety/adversarial datasets. They find that most NLP benchmarks correlate well with human evaluation results, and it is possible to predict human evaluation results based on scores on NLP benchmarks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"This paper studies a very important problem: whether scores on NLP benchmarks correlate with human evaluation results. This can potentially guide researchers to construct better benchmarks\", \"This paper studies the possibility of using NLP benchmarks to predict human evaluation results. Considering the efforts of human evaluation, the problem studied in this paper can help us develop LLMs faster.\"], \"weaknesses\": [\"The experiment parts are highly unclear and hard to comprehend. It is unclear how the correlations are calculated between human evaluation and NLP benchmark scores. There is even no **Experiment Setup** section in this paper, and the part that most looks like the experiment setting is the first seven lines of Section 3. After repeatedly reading those lines, I still cannot understand how the correlations are calculated. 
Precisely,\", \"How do you aggregate the scores of different shots?\", \"Why do you aggregate the results of different shots?\", \"What is the number of shots?\", \"How is the prompt formatted?\", \"How are the demonstrations in the few-shot selected?\", \"Where does the number *150* on Line 148 (page 3) come from?\", \"How is the human evaluation conducted? How many samples are there in the single-turn and multi-turn dialogues? How are the topics selected? What is the distribution of the data?\", \"If the paper only uses four models, is the correlation coefficient calculated using only the benchmark scores of 4 models and the human evaluation results of the models? This means we are only calculating the correlation coefficient between two sets of numbers with only four elements in each set.\", \"There are only four models used in this paper: the four chat models in Llama-2 with different numbers of parameters. The abilities of those models are very distinct, so it is easier for human evaluators or NLP benchmarks to distinguish the strengths of these models. A more challenging and realistic scenario is to consider more LLMs whose abilities are more diverse.\", \"The figures in the paper are terribly and poorly formatted. Those figures do not seem like they are designed to be read. The font sizes in the figures are too small to read and the text is clustered together. I need to zoom in to 400% on my computer to see the words.\", \"Section 3.3 is highly unclear, without explaining what the *communities* discussed in this section are and with no experiment settings that allow the readers to understand what is happening.\", \"Considering that the experiment setting is highly unclear and the results are poorly presented, it is impossible to evaluate the contribution of this work. The paper requires major refinement. However, the paper studies an important problem, and I encourage the authors to keep working on this topic.\"], \"questions\": [\"Q1. 
How do the authors conduct the experiment using the Llama-2-30b model? In fact, there is no 30b model in the Llama 2 series, and I assume the authors are referring to the Llama-2-34b model. However, even Llama-2-34b-chat (or the base model) is not officially released, so I wonder how this paper conducts experiments using Llama-2-34b-chat.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer evg9\", \"comment\": [\"Thank you for your thorough and constructive feedback. We have made substantial improvements to address your concerns about clarity, methodology, and presentation:\", \"We significantly expanded Section 3 to provide a clearer and more detailed explanation of our analyses\", \"To ensure better transparency, we added Appendix A, which provides comprehensive documentation of our experimental methodology and data processing\", \"We also added Appendix B, which presents baseline statistical analyses of the human evaluation data that we collected\", \"Based on your valuable feedback about visualization clarity, we developed three new figures (3, 5, and 7) that more effectively communicate our results\", \"We also removed potentially confusing visualizations and enhanced (but forgot to update) Figure 6's legibility to better support our key findings\", \"While we understand your suggestion about additional models, we have neither time nor money to change this, and we believe our enhanced methodology and clearer presentation merit your reassessment of our paper.\"]}", "{\"metareview\": \"This paper studies the relationships between human evaluations and NLP benchmarks. They find that most NLP benchmarks are broadly highly correlated with human evaluations, and they also fit models to predict a language model\\u2019s human evaluation scores from academic evaluation scores. The problem studied in this paper is interesting and important. 
However, the paper is not well written, and the reviewers raised many issues with the paper's writing. The study only used four LLMs, which is not enough. In summary, this paper needs to be greatly improved and more LLMs need to be added to the study.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers who gave negative scores were not satisfied with the rebuttal and did not change their scores.\"}", "{\"title\": \"Responses to Authors' Responses (1/2)\", \"comment\": \"I want to thank the authors again for their responses. I understand the concerns they have raised and appreciate the effort they have put into addressing the review. I also understand the frustration that can accompany receiving a low score. I welcome this opportunity to discuss the work further, as it allows me to clarify potential misunderstandings and ensure that readers have a complete perspective on how the paper was evaluated.\\n\\nHowever, I must address what appears to be a suggestion that the review was conducted without properly reading the paper. This suggestion is entirely unfounded, as I have thoroughly analyzed the submission. While the authors may not be making such a direct accusation, their responses create a context where this perception might arise. To avoid any confusion, I believe that additional clarification is necessary and will benefit the overall understanding of the evaluation process. Constructive discussions like this are valuable, and I remain committed to engaging in them professionally.\\n\\n> About whether the paper is aggregating the results of different shots. (Which it did not.)\\n\\nWhy do I have such a guess (not a misunderstanding)? This is because, in the original version, the descriptions of the number of shots in the main content are simply \\\"*we used standard evaluation processes for all academic benchmarks including prompt formatting, metrics, 0-shot/few-shot, etc.*\\\" without further explanations. 
How can a reader know this means *\\\"we use different numbers of shots and prompts for each dataset, following the previous work xxx\\\"*? The original sentence seems to say each dataset uses multiple shots, so I can only guess those results are aggregated. Yes, my guess is incorrect, but **this is because the paper did not say anything about it**. This is clear evidence of insufficient experiment details. Next, just saying standard evaluation in NLP does not reveal any details for reproduction, not to mention that the original paper did not explicitly say, \\\"We follow all the experiment settings in Llama-2\\\". (If the paper did say this in the original version, please tell me and I am willing to apologize for that.) Last, the original version of the paper did not even have a dedicated Appendix section for the experiment setting and was only added on the last day of the author response period based on the reviewer's request. Based on all of these, this makes me believe that **the experiments in the initial version of the paper are not sound at all**. Now that the appendix sections have been added, I increase the score to 3 for this reason. \\n\\n> All of our evaluations are standard and exactly follow prior work. Your objections about lack of clarity for how many shots were used, how prompts were formatted, etc. does not seem well founded.\\n\\nThe paper did not even **cite** the prior work they are mentioning, and the lack of clarity is, of course, well founded. Moreover, I do not believe there exists such as standard that says *dataset A uses $K$ shot and use prompt format xxx*. The author's statement on the existence of such a standard is highly questionable and not well-founded.\\n\\n> You identified a lack of an \\\"Experiment Setup\\\" section as a serious shortcoming. 
While we agree that our manuscript can improve with additional clarification and we added Appendix A, our analyses are 3 simple analyses of two matrices (the human evaluation scores and the NLP benchmark scores): (i) correlations, (ii) singular value decomposition and (iii) linear regressions. Such simple analyses should not require significant explanation.\", \"i_want_to_stress_it_again\": \"if the paper does not have an experiment setup section, no one can reproduce the results with high precision. No matter how simple they may seem, they are required. Reproducibility is the core of our discipline, and as a researcher working on evaluation, I know how painful it is to reproduce a paper without proper experiment setting details.\\n\\n> You questioned our access to Llama-2-34B as if some nefarious plot was afoot. Rather, we had explicit permission from Meta to use this model. This access actually strengthens our paper by providing evaluation data on a model not widely available to the community.\\n\\n**This is a strong accusation to say I am suggesting some nefarious plot was afoot**. My original question is *However, even Llama-2-34b-chat (or the base model) is not officially released, so I wonder how this paper conducts experiments using Llama-2-34b-chat.* How can one infer that I am questioning the integrity of how the authors access the model? I was simply questioning where and how one can get such a model, or did the authors train such a model by themselves? I think this question does not have such negative implications. Moreover, in the original paper, the number of parameters of the model is even wrong, making me more curious about how the model is obtained.\"}", "{\"summary\": \"The paper initially explores the correlation between NLP benchmarks and human evaluation. With the advent of increasingly capable LLMs, human evaluations have become a steady and major alternative choice to evaluate the efficacy, performance and capabilities of LLMs. 
An important question that generally arises with the choice is whether NLP benchmarks are useless since human evaluations are costly and time consuming and are not always a gold standard. Where do NLP benchmarks fall? This paper explores this question and also explores the possibility of predicting human evaluations from NLP benchmarks.\", \"two_key_questions_are_asked\": \"- To what extent are human evaluations and NLP benchmarks correlated?\\n- How well can benchmarks predict expensive and time-intensive human evaluations?\\n\\nThe researchers use all the Llama-2 chat models (7,13,30 and 70B parameters) to establish this, which were trained on 2T tokens and fine-tuned using SFT and RLHF. Human evaluations are collected by evaluating the Llama2 chat models pairwise against ChatGPT 3.5 on a dataset of single-turn and multi-turn prompts, where responses are sampled from each model. Three human annotators independently provide a pairwise comparison on the Likert scale (1 to 7, where 1 means chat llama preferred and 7 means chatgpt 3.5 preferred). They end up doing a large-scale study spanning factual questions, language assistance, writing, procedural questions, reasoning and many more. The Chat Llama 2 models are evaluated on many popular NLP benchmarks right from AGI Eval, AI2 Reasoning Challenge, Big Bench Hard, BoolQ, CommonsenseQA, GSM8k, MMLU, MATH, QuAC, PIQA and many more. Standard evaluation processes are used. \\n\\nThe findings revealed that NLP benchmarks are broadly highly correlated with human evaluations, with certain benchmarks showing particularly strong correlations. The most predictive benchmarks included specific subsets of MMLU (covering topics like nutrition, human aging, and sociology), portions of BIG Bench Hard, HellaSwag, ARC, RACE, PIQA, Natural Questions, QuAC, and CommonSenseQA. 
However, some benchmarks showed weaker correlations, including ETHOS, Kth Sentence, most of Inverse Scaling, OpenBookQA, COPA, SciBench, and SIQA.\\n\\nUsing overparameterized linear regression, the researchers successfully demonstrated that NLP benchmark scores could predict human evaluation scores with reasonable accuracy. Despite the small sample size of only four models, leave-one-out cross-validation showed promising results, suggesting that faster and cheaper NLP benchmarks might effectively predict slower, more expensive human evaluations in many cases.\\n\\nThe authors note several limitations, including the small sample size, the assumption of linearity in their predictive models, and potential limits to generalizability across different model families, thus rounding up the study and paving the way for future work as well.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1] The question at the center of the paper -- \\\"Correlation between NLP benchmarks and Human Evaluations,\\\" is an important central question to NLP evaluation in general. Human Evaluations are considered (somewhat so) the gold standard of evaluation but are extremely time-consuming and expensive to run; as models get more capable the human evaluations also get even costlier because now we require experts to evaluate vs requiring less advanced folks earlier, but we can reliably construct more difficult benchmarks for models, so if these two things are correlated, perhaps lesser focus can be placed on human evaluations.\\n\\n2] Predicting Human Evaluations is a difficult task, and LLMs as judges are being increasingly explored as an alternative to human evaluations. 
The method in the paper also showcases some important insights into this process.\", \"weaknesses\": \"1] The small sample size brings into question the generalizability of these insights and results.\\n\\n2] Only uses GPT-3.5 as the comparative model, no insight is provided into why this is the case? And also lacks any discussion of whether chatgpt 3.5 is a reasonable choice of a baseline. \\n\\n3] Perhaps a granular analysis of what makes a benchmark more correlated? Is there something common in the correlated benchmarks? This would also pave the way to designing and determining better benchmarks.\", \"questions\": \"1) Why chatgpt 3.5? Could you justify the choice of this model? Why was chatgpt 3.5 the model chosen for comparison, is it a reasonable choice for a baseline?\\n\\n2) Could you generally talk about the distribution of the Likert scale that you got from the pairwise evals? Was there anything at all in which chatgpt was substantially better and generally chosen? (Assumption here that I suppose llama-2 would be usually better than chatgpt 3.5 in all cases)\\n\\n3) if these outputs were obtained from Chatgpt 3.5, which API was it received from, and what was the exact cutoff (e.g., ChatGPT-3.5-0604, etc.)?\\n\\n4) Pairwise evals ultimately show revealed preferences and model choice between two outputs. Do you think this translates to human evaluation directly on model outputs (not comparisons) on NLP parameters like coherence, semantic relevance, factual relevance, etc.? Could you comment on the choice of pairwise evals?\\n\\n5) Just a general question about related work: is there no related work? (while this correlation aspect might not have been explicitly studied), Studies have considered which of them is better in MT, summarization, and other NLP areas. 
Can you provide a more comprehensive overview of related work, including studies that have compared human evaluations and benchmarks in specific NLP tasks like MT/Summarization, and contextualize this work in the broader field?\\n\\n6) Could you conduct a detailed analysis of features/characteristics shared by highly correlated benchmarks? I think that would help a lot in designing benchmarks in the future.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer evg9 (Part 1)\", \"comment\": \"Thank you for your insightful review! We greatly appreciate you taking the time to thoroughly assess our work. We're glad you found the research question important and meaningful, and that you recognize the value in the generality and scale of both our NLP and human evaluations.\", \"to_address_the_concerns_you_raised\": \"> This study uses only four LLMs, which is too few\\n\\n> More LLMs should be covered in this study. I understand the computational cost during inference and the cost in human evaluation, but four LLMs are definitely too few to support subsequent experiments.\\n\\nWe fully agree that testing more models would provide greater insight and robustness to the findings. Due to the high cost of collecting human evaluations (approximately $250k USD per model), we were constrained in the number of models we could include in this study. Please note that we did make extensive efforts to expand the models tested, but ran into data collection errors. Ultimately, we chose to focus on the 4 models from the Llama 2 family to maintain consistency in model architecture while still spanning a wide range of model scales. 
Your point is well-taken and we will emphasize this as a key limitation in our discussion section.\\n\\n> The correlations between automated NLP benchmarks and human evaluation are calculated merely from two four-dimension vectors, which is unreliable\\n\\nThis is a fair concern given the small sample size. To increase the reliability of the correlations, what would you advise?\\n\\nTo suggest one possibility, we could add confidence intervals estimated via bootstrapping. Specifically, we could resample the 4 models with replacement many times, compute the correlation on each sample, and report the 2.5 to 97.5 percentile range as 95% confidence intervals. We will also compute p-values via a permutation test to quantify the probability of observing correlations as extreme as we did under a null hypothesis of no correlation.\\n\\nWould these additional analyses be sufficient to address the sample size limitation? We welcome any other suggestions you may have.\\n\\n> Insufficient experiments for predicting human evaluation from automated NLP benchmarks, despite cross-validation conducted in the paper\\n\\nSimilar to our response above, (1) why do you find leave-one-out cross validation insufficient, and (2) what analysis (or analyses) would you recommend?\\n\\n> The paper lacks key details, including but not limited to how the prompt set used in human evaluation is obtained, the human evaluation process and its reliability (e.g. inter-annotator agreements), details of how the correlation is calculated (what is ~150 evaluation process?), the settings for linear regression.\\n\\nWe are adding a detailed Experimental Methodology to the Appendix to describe our methodology in meticulous detail. As an overarching comment, all of our NLP benchmark scores are computed in the \\u201cdefault\\u201d or \\u201cstandard\\u201d manner, e.g., as one would find in Meta\\u2019s Llama 2 paper, which is the work that our paper built on top of. 
Please see the new Appendix A.1; we will add more information about human evaluations today or tomorrow.\\n\\nAn \\u201cevaluation process\\u201d is the terminology we use to describe whatever additional information is necessary to describe how scores are computed on a dataset. An evaluation process is the metric, whether samples were generated (and if so, how many), whether 0-shot or few-shot prompting was used (and if so, how many exemplars), whether chain-of-thought prompting was used, etc. If you would advise different terminology to help us encapsulate all of these details, please let us know.\"}", "{\"title\": \"Response to Reviewer evg9 (Part 2)\", \"comment\": \"> I do need more details of the human evaluation in your study. What makes me most confused is the selection of the prompts. Why don't you use the same question sets as those of automated NLP benchmarks?\\n\\nYou raise an interesting point about directly comparing human and automated evaluations on the same set of prompts. In this work, our aim was specifically to identify which NLP benchmark scores are predictive of human preferences on open-ended prompts representative of real-world chat model usage. We chose this approach to maximize the ecological validity and generalizability of the findings to practical applications. For a concrete example, we may want our chat LMs to excel at roleplaying as different characters or at building novel fantastical worlds; the question we want to know the answer to is: which NLP benchmarks provide useful signals on whether models are improving at such tasks? \\n\\nHowever, your suggested approach of comparing evaluations on the same prompts would provide valuable insight into the agreement between human and automated scores in a more controlled setting. We will note this as an important direction for future work in our discussion section. Thank you for the thought-provoking suggestion!\\n\\n> The presentation of the paper could be improved. 
For instance, the font sizes in Fig 3, the upper part of Fig 4, and Fig 6 are too small, making it hard to read.\\n\\nThank you for alerting us to these readability issues in our figures. We are currently revising the identified figures to increase font sizes and ensure all text is clearly legible. The updated figures will be included in our next revision.\\n\\n**Thank you for your patience while we work to integrate your feedback to improve the manuscript.**\"}", "{\"title\": \"Polite Nudge to Reviewer Khx9 to Please Respond\", \"comment\": \"Dear Reviewer Khx9,\\n\\nWe significantly revamped our manuscript and wrote a lengthy response to your review. We strongly believe your score of 1 is unjustified. Could we ask you to please respond?\\n\\nThank you!\"}", "{\"title\": \"Follow Up to Reviewer sJ5t\", \"comment\": \"Dear Reviewer sJ5t,\\n\\nTo circle back to your review,\\n\\n> Just a general question about related work: is there no related work?\\n\\nAs promised, we added a Related Work section in the revised manuscript. We are continuing to work on other improvements and will have an updated manuscript in a day.\\n\\n> The small sample size brings into question the generalizability of these insights and results.\\n\\nTo reiterate, we do agree. Our sample size is small because collecting human evaluations for a single model costs ~$250k USD and we ran into errors trying to collect human evaluations of other models. We are happy to highlight this limitation, but we sadly have no way of fixing it.\"}", "{\"title\": \"Response to Reviewer Khx9 (Part 2)\", \"comment\": \"> The figures in the paper are terribly and poorly formatted. Those figures do not seem like they are designed to be read. The font sizes in the figures are too small to read and clustered together. I need to zoom in to 400% on my computer to see the words.\\n\\nIt is difficult to visualize the data because of how many signals (i.e. human evaluations and NLP benchmarks) we are plotting. 
We are working to improve our visualizations currently.\\n\\n> Section 3.3 is highly unclear, without explaining what the communities this section is discussing and with no experiment settings that allow the readers to understand what is happening now.\\n\\nThe experimental setting is the same as before - nothing has changed. We are simply plotting and discussing the data.\\n\\nRegarding what \\u201ccommunity\\u201d means, the term \\u201ccommunity\\u201d is an (informal) reference to community detection in graphs (https://en.wikipedia.org/wiki/Community_structure). We will be updating this section in 1-2 days.\\n\\n> Q1. How do the authors conduct the experiment using the Llama-2-30b model? In fact, there is no 30b model in the LLama2 series, and I assume the authors are referring to the Llama-2-34b model.\\n\\nYour assumption is correct. We will rename Llama-2-30B to Llama-2-34B. \\n\\n> However, even Llama-2-34b-chat (or the base model) is not officially released, so I wonder how this paper conduct experiments using Llama-2-34b-chat.\\n\\nLlama-2-34B has indeed not been publicly released. We asked for and received permission from Meta to use the model for the purposes of our study.\\n\\n**Thank you for your patience while we work to integrate your feedback to improve the manuscript**\"}" ] }
51WraMid8K
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
[ "Yan Scholten", "Stephan Günnemann", "Leo Schwinn" ]
Comprehensive evaluation of Large Language Models (LLMs) is an open research problem. Existing evaluations rely on deterministic point estimates generated via greedy decoding. However, we find that deterministic evaluations fail to capture the whole output distribution of a model, yielding inaccurate estimations of model capabilities. This is particularly problematic in critical contexts such as unlearning and alignment, where precise model evaluations are crucial. To remedy this, we introduce the first formal probabilistic evaluation framework for LLMs. Namely, we propose novel metrics with high probability guarantees concerning the output distribution of a model. Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment. Our experimental analysis reveals that deterministic evaluations falsely indicate successful unlearning and alignment, whereas our probabilistic evaluations better capture model capabilities. We show how to overcome challenges associated with probabilistic outputs in a case study on unlearning by introducing (1) a novel loss based on entropy optimization, and (2) adaptive temperature scaling. We demonstrate that our approach significantly enhances unlearning in probabilistic settings on recent benchmarks. Overall, our proposed shift from point estimates to probabilistic evaluations of output distributions represents an important step toward comprehensive evaluations of LLMs.
[ "Machine Unlearning", "Alignment", "Large Language Models" ]
Accept (Oral)
https://openreview.net/pdf?id=51WraMid8K
https://openreview.net/forum?id=51WraMid8K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vdeRv6z83g", "uf83eTKHmf", "pZKf8Md7jv", "nqVGpMpoVW", "c0kNDLz1Dg", "aXTIrbtJum", "ZGhzSsA5dq", "X1JoccNw68", "TmPDw9xPZ4", "Ns1jlxA5uv", "LF902jWt1z", "IaxeptKTLS", "CDBfNumU6H", "3ybHIMNMC3", "2FGGTnHcHq", "02CjSuqtuK" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review" ], "note_created": [ 1732041319613, 1732725037594, 1732041744756, 1730362969535, 1734530797703, 1732331170394, 1732042291577, 1732042194976, 1732366052462, 1729934232721, 1732369322886, 1730659540909, 1732613788228, 1737523773806, 1732041895552, 1730681391936 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_v1rM" ], [ "ICLR.cc/2025/Conference/Submission6509/Area_Chair_GKXw" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_tRL6" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_tRL6" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_tRL6" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_B7r6" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_v1rM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6509/Authors" ], [ "ICLR.cc/2025/Conference/Submission6509/Reviewer_MsxG" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer MsxG\", \"comment\": \"Thank you for your review!\\n\\n**Regarding alignment.** In practice, our metrics can be applied independent of the downstream task. 
We chose to emphasize unlearning in the main part of the paper because it is an area where probabilistic leakage can have particularly critical consequences, such as the inadvertent exposure of sensitive information. However, we agree that extending the discussion to alignment enhances the broader applicability of our work.\\n\\nIn response to your comment, we provide additional alignment experiments in Appendix B to demonstrate how our probabilistic evaluation framework can be applied to alignment tasks. Specifically, we show that our proposed probabilistic metrics can seamlessly generalize to tasks beyond unlearning. These metrics require only a continuous or binary evaluation measure derived from sampling model outputs, which can then be integrated into our formulas to estimate bounds efficiently. By applying this approach to alignment, we estimate the risk of LLMs generating harmful responses, highlighting the adaptability and efficiency of our framework in alignment contexts.\\n\\nWe hope the additional experiments and explanations in Appendix B clarify how our approach can be applied to alignment tasks and strengthen the contribution of the paper. Thank you for pointing out this opportunity for improvement!\\n\\n**Regarding the definition of $\\\\alpha$ (Q1).** Thank you for pointing out potential to clarify notation. While we introduce the significance level $\\\\alpha$ in the main text at the beginning of Section 4.1, we agree that it is a critical component for understanding the paper. In response to your comment, we added further clarifications to the manuscript (line 181).\\n\\n**Regarding the impact of $\\\\lambda_r$ on metrics other than diversity (Q2).** Thanks for pointing this out. We briefly state in the caption of Figure 5 and subsection 6.3 (c) that entropy regularization has no impact on the utility of the model. 
We will make it more prominent in the final paper that $\\lambda_r$ did not have any considerable effect on model utility or output diversity.\\n\\n**Regarding the proposed entropy optimization objective (Q3).** For entropy optimization we simply compute the entropy of the softmax output of the model. This loss computation has no measurable effect on the speed of the training in our experiments.\\n\\nWe hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.\"}", "{\"title\": \"Response to Reviewer B7r6\", \"comment\": \"Thank you for your review!\\n\\n**Regarding notation (Bounds, L118, L230).** We agree with you and have revised the notation in response to your feedback. Specifically, we renamed the bounds to clarify their intended meanings: binary leakage bound $M_{bin}$, general leakage bound $M_{gen}$, expectation bound $M_\\\\mu$, and standard deviation bound $M_\\\\sigma$. We also followed your suggestion to use the Kleene star for denoting sequences of arbitrary length. \\n\\n**Regarding the binary case (L177).** Your understanding is correct. The beta distribution comes from the Clopper-Pearson confidence bound, a common method for computing binomial confidence intervals. In response to your comment, we clarified this explanation in the manuscript.\\n\\n**Regarding $\\\\epsilon$ and the DKW-inequality (L197).** Thank you for pointing out the potential to make our statements more self-contained in the main paper. In response to your comment, we clarified the statements in Section 4.1 by explicitly stating and citing the DKW-inequality, showing that $\\\\epsilon$ directly stems from it. 
\\n\\n**Regarding Proposition 3 (L211).** You are right, the bounds stem from lower and upper bounding the integral $E[X] = 1-\\int F(x) \\, dx$ with Riemann sums. Please note that we elaborate on this relationship in a self-contained proof in Appendix D. In response to your comment, we have added pointers to proofs directly in the main text. \\n\\n**Regarding $\\\\eta_i$ (L225).** The variable $\\\\eta_i$ bounds the term $(x-E[X])^2$ in the variance integral. Please note that we elaborate on this in a detailed, self-contained proof in Appendix D. In response to your feedback, we improved clarity in the main text and added pointers to the proof in the Appendix.\\n\\n**Regarding the ED score (L246).** We agree that generalizing the ED score would be beneficial. We revised the section in the manuscript to introduce a hyperparameter for the ED score, allowing it to be selected based on the risk level as you proposed. \\n\\n**Typo (L150).** We also fixed the typo in line 150, thank you for pointing this out.\\n\\nPlease note that we provide detailed, self-contained proofs for all statements in Appendix D, offering further insights for interested readers. For readers less familiar with statistical machine learning, we also provide user-friendly and ready-to-use code uploaded as supplemental material.\\n\\nWe hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.\"}", "{\"summary\": \"This paper introduces a probabilistic perspective on LLM evaluation which shifts from single point estimates towards evaluating entire output distributions, offering significant potential for the field of unlearning, and proposes a novel framework to directly assess the output distribution of a model. Besides, an unlearning loss based on entropy optimization and adaptive temperature scaling is also proposed.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
**Novel Perspective on LLMs Unlearning Evaluation** Existing deterministic metrics are apparently insufficient for LLM unlearning evaluation, and this paper introduces a probabilistic perspective to mitigate this issue.\\n\\n2. **Adequate Mathematical Derivation** In this paper, the authors demonstrate the rationality of their method theoretically and empirically.\", \"weaknesses\": \"1. **More Discussion on LLM Alignment Evaluation** Since the title contains \\\"ALIGNMENT\\\", more discussion on this topic should be included in this paper.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a formal probabilistic evaluation framework for LLMs and designs new metrics with high-probability guarantees concerning the output distribution of a model. In addition, an unlearning loss based on entropy optimization and adaptive temperature scaling is also proposed. The proposed framework is novel and important. The evaluation results show the effectiveness of the proposed metrics and method. The paper is generally well written. There are some minor issues with the presentation that need to be addressed in the final version. All reviewers ultimately have a positive opinion of the paper.\", \"additional_comments_on_reviewer_discussion\": \"Two reviewers increased their scores during the rebuttal period and all reviewers ultimately have a positive opinion of the paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your response which addresses some of my concerns. And thanks for the plan to release the code.\\nIf I'm not misunderstanding, the estimation is mainly the average of N independent samples (plus 2 times standard deviation)? 
I still feel that a clearer procedural description of the method would help people understand things more easily.\"}", "{\"title\": \"Global response\", \"comment\": [\"We want to thank all reviewers for their valuable feedback. We changed our initial submission in response to their comments as follows:\", \"Reviewers MsxG, v1rM and tRL6: We have included additional alignment experiments in Appendix B to illustrate that our probabilistic evaluation framework generalizes to alignment tasks.\", \"Reviewer tRL6: Clarifications on the algorithmic procedure of our probabilistic evaluation framework.\", \"Reviewer B7r6: Improved notation and a few clarifications.\"]}", "{\"title\": \"Response to Reviewer tRL6\", \"comment\": \"Thank you for your review!\\n\\n**Regarding our proposed metrics.** Please note that all our metrics can be calculated using a single mathematical formula. To make it easier for the community to use our metrics, we will provide our code as a GitHub repository and our framework as a pip library after acceptance. We have already attached the code used to conduct the experiments in the paper as supplemental material.\\n\\n**Regarding additional use-cases.** We added an additional evaluation on model alignment to our work. We hope the additional experiments and explanations in Appendix B clarify how our approach can be applied to tasks beyond unlearning and strengthen the contribution of the paper. Thank you for pointing out this opportunity for improvement!\\n\\n**Regarding entropy optimization (Q1).** We conducted an ablation study to investigate if entropy optimization has a negative impact on model properties. In Figure 5 (a) we show that by choosing suitable values for $\\\\lambda_r$, entropy optimization does not affect the diversity of model generations on unseen utility datasets. 
In Figure 5 (b) we illustrate that the confidence / entropy on the retain set (data unrelated to the unlearning task) remains stable throughout training.\\n\\n**Regarding the effect on the model's general utility (Q2).** For the experiments performed in the paper, model utility was not substantially affected by unlearning. In Figure 5 (c) we demonstrate that entropy optimization also does not negatively affect the utility of the model. We will include a table that summarizes the effect of the different unlearning techniques on model utility in the final version of the paper. \\n\\nWe hope that we could address all your questions to your satisfaction. Please let us know if you have any additional comments or questions.\"}", "{\"title\": \"Response to Reviewer tRL6\", \"comment\": \"Thank you for your feedback!\\n\\nWe agree that a clearer procedural description can be helpful for the reader. In response to your comment, we revised the manuscript and included a procedural description in Section 4 (lines 171-179). Specifically, we now explain that we first sample $n$ independent answers from the LLM and measure the information leakage $X_i$ in each answer. Then, we compute our (single-formula) metrics using the previously computed $X_1, \\\\ldots, X_n$. Please note that we introduce four distinct metrics (leakage bounds, expectation bound, standard deviation bound), each offering unique insights into the leakage distribution. Your understanding is correct, each metric is essentially based on taking an average of the samples (and additionally including an error bound on the estimation).\\n\\nWe hope this clear algorithmic description along with the uploaded ready-to-use code will help the reader's understanding and make the method easier to follow. Overall, we believe our probabilistic approach is an important contribution to the community, as also highlighted by other reviewers.\\n\\nWe hope that we could address your feedback to your satisfaction. 
Please let us know if you have any additional comments or questions.\"}", "{\"summary\": \"This work introduces a probabilistic perspective for LLM unlearning evaluation. Instead of relying on deterministic greedy decoding as in existing evaluation methods, this work adopts a probabilistic framework and derives metrics over the high-probability output distribution. The proposed metrics demonstrate the limitations of previous methods, namely their inability to identify false unlearning. Moreover, a novel loss based on entropy optimization and adaptive temperature scaling are introduced to improve model unlearning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Designing good evaluation metrics is important for a research direction such as unlearning, and this work indicates a limitation of existing metrics and correspondingly proposes improved metrics.\", \"The proposed metrics and methods are shown to be effective on two recent unlearning benchmarks.\"], \"weaknesses\": [\"There is a lack of algorithmic description on how the proposed metrics are calculated, without which readers who lack certain statistical machine learning knowledge or who want to implement the metrics would find it difficult to understand and apply the proposed metrics.\", \"The proposed metrics are only tested for the unlearning case, which is indeed a well-suited scenario. Nevertheless, it would be nice if they could be extended to more use cases, such as factuality, to verify the effectiveness of the metrics.\"], \"questions\": [\"For entropy optimization, I'm not sure about the intuition to minimize the entropy on Dfg. 
Wouldn't this lead the model to be confident in a different answer, which I think might be a strange thing to enforce.\", \"It would be interesting to see how unlearning (as well as the proposed optimization methods) affects the model's general ability.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the modification, and I've adjusted my score accordingly.\"}", "{\"summary\": \"This paper proposes a set of metrics that bound the risk of unlearning by estimating bounds on the probability of leaks and on the deviation of such random variables. Instead of computing the metric over deterministic point estimates drawn from greedy decoding of LLMs, it proposes the use of Monte Carlo methods, and then estimates these bounds by computation over the empirical distribution. Additionally, the authors propose some mitigation methods to reduce the risk of leakage when fine-tuning an LLM, and show through experiments that such measures offer potential for reducing the risk and undesired biases in model outputs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes metrics that are defined on the output distribution rather than the point estimate. I think this is a remarkable step and should be considered in various other scenarios.\", \"The exposition on the estimation of the probability of leakage and the bounds on the standard deviation is intuitive and sound.\", \"The set of experiments presented is convincing.\"], \"weaknesses\": [\"Notations can be improved in the exposition. For example, $M_1, \\\\cdots, M_4$ actually stand for estimates of different variables, rather than 4 different ways of estimating the same variable. See questions below for more suggestions.\", \"Some derivations are not self-contained (e.g. 
in Metric 2, the $\\\\epsilon = \\\\sqrt{\\\\frac{\\\\log (1/\\\\alpha)}{2n}}$ is not self-contained and is derived from prior work.\", \"Expositions tend to be a bit too formal, and lack some intuitions and insights. See below.\"], \"questions\": [\"L118: $V^\\\\infty$: I believe that the more accepted notation for sequences of arbitrary sequences is $V^*$, where $*$ is the Kleene star.\", \"L150: \\\"extend\\\" -> \\\"extent\\\".\", \"L177: \\\"Binary case\\\": This exposition feels a bit verbose. My understanding is that you are empirically fitting a Beta distribution based on whether data is leaked through your Monte-Carlo experiments, and outputting a quantile based on a desired safety level $\\\\alpha$. Please correct me if my understanding is not correct.\", \"L197: Make this part more self-contained: especially, what is the Dvoretzky-Kiefer-Wolfowitz inequality and how does it apply here?\", \"L211: Proposition 3: These lower and upper bounds are reminiscent of the Darboux integrals of $F_n$. If possible, please elaborate on the relationship of this bound estimate to an underlying integral expression. Additionally, it'd be good to reiterate that $F_n$ is the empirical CDF.\", \"L225: What does $\\\\eta_i$ bound? Please discuss.\", \"L230: $M_4$: I think it'd be better to call this something like $M_\\\\sigma$, to be clear that this is not an estimate of the probability.\", \"L246: $\\\\bar X + 2\\\\bar \\\\sigma$: why 2? 
I believe this is a choice based on the safety level, but it would be better to define this as a hyperparameter whose selection is based on the accepted risk level.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response, which has addressed my concerns, and I've adjusted my score accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Response to Reviewer v1rM\", \"comment\": \"Thank you for your review!\\n\\nIn practice, our metrics can be applied independently of the downstream task. We chose to emphasize unlearning in the main part of the paper because it is a critical area where probabilistic leakage can have particularly severe consequences, such as the inadvertent exposure of sensitive information. However, we agree that extending the discussion to alignment enhances the broader applicability of our work.\\n\\nIn response to your comment, we provide additional alignment experiments in Appendix B to demonstrate how our probabilistic evaluation framework can be applied to alignment tasks. Specifically, we show that our proposed probabilistic metrics can seamlessly generalize to tasks beyond unlearning. These metrics require only a continuous or binary evaluation measure derived from sampling model outputs, which can then be integrated into our formulas to estimate bounds efficiently. By applying this approach to alignment, we estimate the risk of LLMs generating harmful responses, highlighting the adaptability and efficiency of our framework in alignment contexts.\\n\\nWe hope the additional experiments and explanations in Appendix B clarify how our approach can be applied to alignment tasks and strengthen the contribution of the paper. Thank you for pointing out this opportunity for improvement!\\n\\nWe hope that we could address all your questions to your satisfaction. 
Please let us know if you have any additional comments or questions.\"}", "{\"summary\": \"In this paper, the authors address the problem of reliable unlearning in LLMs. First, they introduce the problem that evaluations based on deterministic point estimates (sampled texts) fail to reliably catch the risks exposed in probabilistic outputs. For the case of unlearning, the authors state that existing methods rely on a single generated sequence to identify whether information leakage is present or not. This might not be enough, as the assessed model might still eventually produce a text with leaked information (with some probability). Therefore, the authors propose a set of 4 metrics aiming to accurately quantify information leakage in the model output distribution. Then, the authors propose a novel unlearning training objective, which aims to simultaneously minimize the model's output distribution entropy on a set of \\\"forget samples\\\" while retaining diversity on \\\"retain samples\\\". The loss itself is a set of additional terms which can be applied to some existing unlearning objectives. Finally, the authors conduct a comprehensive evaluation of unlearning with different methods using the proposed metrics.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1) The authors provide 4 carefully defined metrics, along with the necessary guarantee proofs. Overall, the paper is very well composed.\\n2) The 4 proposed metrics allow comprehensive evaluation of model unlearning using the entire output distribution (potentially; right now it is limited to MC sampling on certain examples). 
This appears to be a novel contribution and addresses the lack of probabilistic evaluation in the field of unlearning.\\n3) This approach can potentially be extended to other tasks which require reliable evaluations.\\n4) The proposed entropy optimization objective is clearly defined and is formulated as additive terms which can be applied to existing unlearning losses, which makes it easy to implement. Addressing diversity on retain samples allows ensuring that the model remains useful after unlearning.\", \"weaknesses\": \"1) While the paper title reads as \\\"A Probabilistic Perspective on Unlearning and **Alignment** for Large Language Models\\\", the authors effectively **do not touch** alignment in their work, leaving it for further research. Indeed, alignment is only mentioned in the Introduction, Limitations and Conclusion. This paper would benefit from having at least a small discussion on how the proposed metrics can be extended to other evaluation tasks.\", \"questions\": \"1) The term $\\\\alpha$, which is used extensively in the formulation of the metrics and throughout the paper, is only loosely defined in the appendix. While it may be the usual practice in math-heavy papers, it does substantially confuse readers who are not so proficient. It is quite a pity to read a definition or proof and find terms that are simply not defined anywhere above. Consider defining $\\\\alpha$ in the main text of the paper.\\n2) How does increased $\\\\lambda_r$ impact metrics other than diversity?\\n3) How does the proposed EO objective impact training efficiency (in terms of increased latency or increased VRAM requirements)? Does it limit its applicability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
5187wrocJq
Dice-GAN: Generative Adversarial Network with Diversity Injection and Consistency Enhancement
[ "Jing Shan", "XIAOXUAN MA", "Jiaying Wang" ]
In the field of natural language description tasks, one challenge for text-to-image modeling is to generate images that are both of high quality and diversity and maintain a high degree of semantic consistency with the textual description. Although significant progress has been made in existing research, there is still potential for improving image quality and diversity. In this study, we propose an efficient attention-based text-to-image synthesis model based on a generative adversarial network, named Dice-GAN. To enhance the diversity of image generation, we design a diversity injection module, which injects noise several times during the image generation process, fuses the noise with the textual information, and incorporates a self-attention mechanism to help the generator maintain global structural consistency while enhancing the diversity of the generated image. To improve the semantic consistency, we design a consistency enhancement module, which enhances the semantic consistency of image generation by combining word vectors and a hybrid attention mechanism to achieve dynamic weight adjustment for different image regions. We conducted experiments on two widely used benchmark datasets, CUB and COCO. Dice-GAN demonstrated significant superiority in improving the fidelity and diversity of image generation compared to the existing approaches.
[ "text-to-image", "generative adversarial networks", "self-attention", "semantic consistency" ]
Reject
https://openreview.net/pdf?id=5187wrocJq
https://openreview.net/forum?id=5187wrocJq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y2Wwn9HixR", "vQXqSguApC", "uZtwFfmBDB", "u7NlgsIh7h", "r985bbsQUH", "pYdcC9avxl", "oTjA9VcFZs", "jIMPG6hNv3", "iIgMoLzumA", "ekTiblEsyb", "dU2boUbe9U", "X9LEuMQW37", "T2vq5bqC1R", "SomdNLjERY", "NdM6BixaYJ", "Hj0fd4OedO", "CIPmMs0OpY", "9koizuwtds", "6ZbRsHXWm4", "5COxWN6mCc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732198952850, 1732544074167, 1732544689836, 1732515215398, 1732544453955, 1732696000110, 1729089697473, 1732669687461, 1732344568024, 1732590289961, 1730691437911, 1732619710608, 1734426081430, 1732199672185, 1732200057159, 1737523641097, 1730109909564, 1730294175879, 1732194464235, 1732544144471 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_TDGk" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_zzQP" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_zzQP" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_66gE" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_zzQP" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_r6db" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Area_Chair_oDrJ" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4454/Reviewer_66gE" ], [ 
"ICLR.cc/2025/Conference/Submission4454/Reviewer_TDGk" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ], [ "ICLR.cc/2025/Conference/Submission4454/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"Dear reviewer, thank you very much for your valuable suggestions.\\nAccording to the questions you raised, our answers and modifications are as follows. Please let us know whether these modifications are satisfactory.\\n\\nQ1. The manuscript lacks a detailed examination of the model's performance across varying levels of text complexity.\\n\\nWe have added an analysis of the image generation results under different levels of text complexity in the experimental section, such as a comparison of simple and complex descriptions, to show the robustness of Dice-GAN when dealing with different language inputs.\\n\\nQ2. The reviewer wants to see the experiment about computational efficiency.\\n\\nWe have supplemented inference-time experiments for image generation with the Dice-GAN model and compared them with the baseline models to evaluate the practicality and efficiency of Dice-GAN.\\n\\nQ3. The study does not thoroughly investigate the model's capacity to handle various textual attributes, such as color, size, and object positioning. \\n\\nWe have conducted a more focused evaluation of specific text attributes such as color, size, and object localization, and analyzed the ability of Dice-GAN to accurately reflect these descriptive features to more fully demonstrate the adaptability of Dice-GAN.\\n\\nSpecifically, our modification of Section 4.2.3 is as follows.\\n\\n**In this case study, we analyzed the CUB and MS-COCO datasets and compared the outputs of AttnGAN, DMGAN, DF-GAN, DE-GAN, StackGAN, StyleGAN, and our proposed Dice-GAN, as shown in Figure 5. It is found that there are significant differences in the synthesis quality of these models. First, for the CUB dataset, the models have some problems in image generation under long, meticulously detailed text descriptions. 
For example, the image in column 1 is missing body parts, the body shape is distorted in the image in column 2, the feather texture is confusing in column 3, and the body proportions are incongruous in column 4. The differences in performance between models are still significant when using text with short descriptions, such as the inconsistent colors in column 5, while column 6 results in greater diversity in the generated images due to the lack of specific descriptions of bird types, colors and sizes. In contrast, the Dice-GAN model demonstrates excellent image synthesis ability when processing both long- and short-text tasks. The model can effectively maintain the integrity and coherence of the subject in the image while generating details with natural gestures and realism, especially when generating bird images. In addition, when processing complex scene synthesis tasks from the MS-COCO dataset, Dice-GAN not only demonstrates excellent semantic consistency under long text descriptions, accurately localizing textual features such as \u201cMilkshake\u201d in column 8 and \u201cTrain track\u201d in column 10, but also ensures the harmony and accuracy of image feature localization in the face of short text descriptions, as demonstrated by \u201cbedroom\u201d in column 11 and \u201ckite\u201d in column 12. This shows that the Dice-GAN model is highly specialized in generating complex scenes with high fidelity.**\\n\\nQ4. How does Dice-GAN perform under different levels of input noise? \\n\\nWe have divided the ablation study into two parts, covering the DI and CE modules respectively, so as to discuss the details in more depth. And, as suggested, we have added a performance study under different levels of input noise to the ablation study.\\n\\nQ5. 
What measures were implemented to ensure that the DI module does not excessively degrade visual quality due to noise injection? \\n\\nWe have added a discussion of the strategies used to balance noise injection and maintain visual quality.\\n\\n**To prevent the noise from introducing excessive randomness, we add a self-attention mechanism to the module to maintain the consistency of the global structure. In addition, the experiments also show that too many feature fusion layers will increase the computational burden while yielding little benefit, so we finally choose to add two feature fusion layers.**\\n\\nQ6. Does the CE module exhibit limitations in maintaining semantic consistency for longer, more detailed text descriptions? \\n\\nWe have added a discussion in the ablation experiments.\\n\\n**In the ablation experiments, adding either the conditional channel attention mechanism or the spatial attention mechanism alone ignores some information, while combining the two mechanisms in the CE module significantly enhances the consistency of the model. In addition, the experiments found that the resolution of the image features generated in the early stages of generation is small, so the effect of adding the CE module on consistency enhancement is not obvious there, while it increases the computation time of the model.**\"}", "{\"title\": \"Revision updated\", \"comment\": \"Dear reviewer, the latest revised version is ready for your evaluation. Please help us to assess if the changes are satisfactory. If there are additional improvements we can make, please kindly let us know. Thank you again for the valuable suggestions.\"}", "{\"title\": \"Revision updated\", \"comment\": \"Dear reviewer, thank you so much. We have prepared the latest revised version. Please help us to assess if the changes are satisfactory. If there are additional improvements we can make, please kindly let us know. 
Thank you again for the valuable suggestions.\"}", "{\"summary\": \"The paper proposes Dice-GAN, an efficient attention-based text-to-image synthesis model. To enhance image diversity, a diversity injection module is introduced, incorporating noise and a self-attention mechanism. A consistency enhancement module, combining word vectors and a hybrid attention mechanism, improves semantic consistency. 
Experimental results on CUB and COCO datasets demonstrate Dice-GAN's superiority in image fidelity and diversity compared to existing approaches.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Clear and well-organized presentation.\", \"Superior performance to other GAN-based methods.\"], \"weaknesses\": [\"Limited novelty: While the diversity injection module is a contribution, the core idea of adding noise is not entirely novel.\", \"Lack of comparison to diffusion models: Given the current dominance of diffusion models in text-to-image generation, a more comprehensive comparison to state-of-the-art diffusion-based methods is essential to establish Dice-GAN's significance.\", \"Insufficient discussion of other generative models: The paper could benefit from a more in-depth discussion of how other generative models, such as flow-based models and StyleGAN, could be adapted or combined with Dice-GAN to further enhance diversity and quality.\"], \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. However, it remains unclear whether a state-of-the-art T2I model trained on the same dataset as DiceGAN would outperform DiceGAN. I would recommend including such experiments as part of future work. I will keep my score unchanged.\"}", "{\"title\": \"Response\", \"comment\": \"The reviewer appreciates the author's response, although I think more experiments speak louder than words. I raise my score to 6 and encourage the author to provide more experiments.\"}", "{\"comment\": \"How did you train CogView and DALL-E for the experiment? 
Additionally, it would be helpful to color-code the changes (e.g., using blue) to make it easier to identify the modified part in the revised version.\"}", "{\"summary\": \"This work proposes DICE-GAN, a single-stage text-to-image GAN to produce high-quality and high-diversity images with improved semantic consistency with the text condition. The paper proposes two modules: The Diversity Injection (DI) module, which adds learnable noise to the image features for increasing diversity in generated images, and the Consistency Enhancement (CE) module, which allows the model to dynamically adjust the weights of different image features according to input text conditions for improved semantic consistency and fidelity.\\n\\n--\\nThe authors have provided the ablation study for the DI module on the CUB dataset with a small improvement on the IS metric. However, it is unclear if these gains will be present when scaling to larger datasets like COCO or Imagenet. The presented argument for the novelty of the DI module is not new: \\\"injects noise several times during the image generation process, fuses the noise with the textual information, and incorporates a self-attention mechanism to help the generator maintain global structural consistency.\\\" Further, the reported IS score is low compared to AttnGAN and DM-GAN on the MS-COCO dataset, and a full-scale comparison on Imagenet is missing.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea of adding learnable noise in different training phases and correction with self-attention to improve generation diversity is novel and interesting.\\n\\n2. The authors demonstrate improved performance on the IS and FID metrics on the CUB dataset and on the FID metric on the MS-COCO dataset.\\n\\n3. The authors provide an ablation study demonstrating improvements in results by adding Diversity Injection (DI) and Consistency Enhancement (CE) modules.\", \"weaknesses\": \"1. 
The novelty of the work is limited. The idea of feature fusion in Eq 1 in the DI module is not novel and has been explored before [1,2,3] in the context of image generation. Further, the idea of masking features in a condition-dependent manner has limited novelty.\\n\\n2. Lack of clarity in Sec 3.2 writing and Fig 4. The idea behind the Conditional Channel Attention mask ($M_c$) and Spatial Attention mask ($M_s$) is unclear. The motivation behind generating masks from both average and max channels is also unclear. Further, quantities including $G^{c}_{max}$ and $G^{c}_{avg}$ are missing in Fig 4, making it difficult to understand the figure pipeline.\\n\\n3. The authors claim that Dice-GAN utilizes a single-stage model structure for improved performance but are missing comparisons with multi-stage methods, including StackGAN++[4].\\n\\n4. Missing ablation studies:\\n- Why are two feature fusion layers needed in the DI module? How was this hyperparameter determined?\\n- How does the learnable noise $\\\\sigma$ vary when going from lower to higher layers in the trained model?\\n- Missing ablation on design choices in the CE module on the use of average and max features and on the conditional channel attention and spatial attention submodules.\\n\\n5. The proposed method achieves a lower IS score on the MS-COCO dataset, and the authors argue that this is due to the Inception model used in IS computation being pre-trained on the ImageNet dataset. The authors should provide results on Imagenet or an Imagenet subset to back their claims.\\n\\n[1] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In AAAI, 2018.\\n[2] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.\\n[3] Peebles, William, and Saining Xie. \"Scalable diffusion models with transformers.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 
2023.\\n[4] Zhang, Han, et al. \\\"Stackgan++: Realistic image synthesis with stacked generative adversarial networks.\\\" IEEE transactions on pattern analysis and machine intelligence 41.8 (2018): 1947-1962.\", \"questions\": \"Please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer, we used the pre-trained versions of CogView and DALL-E. Given that the training process of these two models is extremely complex and requires significant computational resources, we obtained their pre-training weights from the open source community. We also include the description in the paper. According to your requirements, we have marked the modified part in blue, please download the new revision. Thank you!\"}", "{\"metareview\": \"The paper introduces Dice-GAN, a novel text-to-image generation model that incorporates a Diversity Injection (DI) module to enhance image diversity and a Consistency Enhancement (CE) module to improve semantic alignment. Experimental results on CUB and MS-COCO datasets demonstrate that Dice-GAN outperforms state-of-the-art models in visual quality and fidelity. However, the novelty of this paper is marginal (e.g., the DI module's feature fusion approach, masking features). Besides, this paper lacks sufficient comparisons with multi-stage methods and state-of-the-art diffusion models (CogView and DALL-E) on different datasets or cross datasets to show the generalization ability of the proposed method. The rebuttal did not fully address the reviewers' problems, and all the reviewers lean towards rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about the limited novelty of the Diversity Injection module, lack of comparisons to diffusion models, incomplete ablations, and unclear figures. 
The authors clarified design choices and partially addressed ablation and clarity issues but did not provide sufficient new evidence or comparisons. While the paper demonstrates merit in improving image diversity and semantic consistency, the unresolved novelty concerns and missing comprehensive experiments weighed heavily in the final decision.\"}", "{\"title\": \"Response\", \"comment\": \"Dear reviewer, thank you very much for your valuable suggestions.\\n\\nAccording to the questions you raised, our answers and modifications are as follows. Please let us know whether these modifications are satisfactory.\\n\\nQ1. Limited novelty: While the diversity injection module is a contribution, the core idea of adding noise is not entirely novel.\\n\\nWe acknowledge that adding noise to enhance diversity is not entirely new. However, we emphasize that the innovation of the DI module is to fuse the noise injection with the text information and balance the global structure through the self-attention mechanism, to promote diversity while maintaining the image quality. We will emphasize this more explicitly in the abstract. Specifically, our modification is as follows.\\n\\n**To improve the diversity of image generation, we design a diversity injection module, which injects noise several times during the image generation process, fuses the noise with the textual information, and incorporates a self-attention mechanism to help the generator maintain global structural consistency while enhancing the diversity of the generated image.**\\n\\nQ2. 
Lack of comparison to diffusion models: Given the current dominance of diffusion models in text-to-image generation, a more comprehensive comparison to state-of-the-art diffusion-based methods is essential to establish Dice-GAN's significance.\\n\\nWe have supplemented the comparison experiments with recently proposed diffusion models such as CogView, DALL-E and ShiftDDPMs, and conducted a detailed analysis in the experimental results section to fully demonstrate the advantages and applicable scenarios of Dice-GAN. \\n\\nSpecifically, our modification of Section 4.2.1 is as follows.\\n\\nThe comparison results of IS and FID across different models are shown in Table 1, where the best performance is shown in bold. By comparing with the stacked architectures AttnGAN, StackGAN, StackGAN++, and StyleGAN and the single-stage architectures DM-GAN, DF-GAN, and DE-GAN on the CUB dataset, our Dice-GAN model demonstrates significant enhancement in the IS metric, from 4.28 to 4.93, and the FID metric, from 24.17 to 15.81, demonstrating that Dice-GAN exhibits excellent performance on both metrics. Turning to the MS-COCO dataset, Dice-GAN excels in FID performance, reducing it from 34.53 to 23.31. However, Dice-GAN slightly lags behind the other methods in terms of the IS metric. This discrepancy can be attributed to an inherent limitation of the IS metric (Zhang et al., 2021): the Inception model for IS computation was pre-trained on the ImageNet dataset, whose images are typically characterized by a single primary object, in contrast to the combinations of multiple objects that are often found in the MS-COCO dataset. This difference may lead to bias in IS assessment. We show the object differences between the ImageNet dataset and the COCO dataset in Figure 7 of the final appendix. Notably, our approach produces superior results to the diffusion models CogView, DALL-E, and ShiftDDPMs. 
In the above study, in addition to evaluating the image synthesis quality of the model, we also examined its operational efficiency, especially the speed of image generation. The results show that the Dice-GAN model not only performs superiorly in terms of image quality and semantic consistency, but also significantly improves the efficiency of image generation. Specifically, compared to other models, Dice-GAN reduces the average time for image generation from 23.98 seconds to 9.02 seconds, i.e., the generation time is reduced by about 62.4%. This improvement makes Dice-GAN more efficient and practical in real-world applications.\\n\\nQ3. Insufficient discussion of other generative models: The paper could benefit from a more in-depth discussion of how other generative models, such as flow-based models and StyleGAN, could be adapted or combined with Dice-GAN to further enhance diversity and quality.\\n\\nWe extend the discussion of stacked structure-based models and other generative models such as StackGAN, StackGAN++, and StyleGAN in the Related Work section, and we add new experiments to compare our approach with these methods.\"}", "{\"title\": \"Response\", \"comment\": \"Dear reviewer, thank you very much for your valuable suggestions.\\n\\nBased on the questions you raised, our answers and modifications are as follows. Please help us to assess if these modifications are satisfactory.\\n\\nQ1. A comparison with recently proposed text-to-image generation models is needed. Not only should there be an analysis of issues with GANs, but also recent Diffusion models, along with performance comparisons. Is there a specific reason you only compared with ShiftDDPMs in the case of Diffusion models? 
Please provide a detailed response.\\n\\nWe have supplemented the comparison experiments with recently proposed diffusion models such as CogView, DALL-E and ShiftDDPMs, and conducted a detailed analysis in the experimental results section to fully demonstrate the advantages and applicable scenarios of Dice-GAN.\\n\\nQ2. Please provide a detailed explanation of the table and figure captions.\\n\\nWe have carefully checked the table and figure captions to make sure they are clear and easy to understand, and have provided the necessary explanations and instructions.\\n\\nQ3. Performance comparisons on diverse datasets are required. Additionally, besides IS and FID, comparisons with other performance metrics are requested (e.g., CLIP score).\\n\\nWe have added performance comparisons in Section 4.2.1. In addition to IS and FID, we supplement them with the LPIPS metric to more comprehensively evaluate the diversity of the results generated by Dice-GAN.\\n\\nSpecifically, our modification is as follows.\\n\\n**To fully evaluate the performance of the DI module in improving image diversity, we computed the average LPIPS distance between 3K pairs of images, each generated from the same sentence. Higher LPIPS values indicate greater differences between images, thus reflecting better diversity. The results of the ablation experiments on the CUB dataset are shown in Table 2.**\\n\\n**Impact of DI: The integration of the DI module significantly improves the image generation quality of the model, with an IS value of 4.81 and a FID value of 18.37. 
Through the ablation experiments, we find that the injected noise increases the stochastic diversity of the generated images. The initial stage of training uses noise vectors of lower dimensionality, and the dimensionality of the noise vectors is gradually increased as training proceeds; at the same time, to prevent the noise from introducing excessive randomness, we add a self-attention mechanism to the module to maintain the consistency of the global structure. In addition, the experiments also show that too many feature fusion layers increase the computational burden without improving performance, so we finally choose to add two feature fusion layers to enhance the fusion of text information and image features while maintaining computational efficiency. Together, these optimization measures improve the ability of the DI module to generate high-fidelity and diverse images, further proving the effectiveness and practicality of the DI module.**\\n\\n**Effect of CE: After combining the CE module, the IS value increases from 4.62 to 4.65, and the FID value decreases significantly from 19.40 to 16.24. In the ablation experiments, adding either the conditional channel attention mechanism or the spatial attention mechanism alone ignores part of the information, whereas combining the two mechanisms in the CE module significantly enhances the consistency of the model. In addition, the experiments find that the image features generated in the early stages of the model have low resolution, so adding the CE module there brings little improvement in consistency while increasing the computation time of the model. Therefore, we chose to add the CE module at the stage with a resolution of 64 \\u00d7 64, which significantly improves the semantic consistency of the model-generated images while maintaining computational efficiency. 
These findings confirm the effectiveness and usefulness of the proposed CE module in improving the semantic consistency of model-generated images.**\\n\\nQ4. The examples of qualitative results are too limited.\\n\\nWe have added more examples of qualitative results to show the performance benefits of Dice-GAN more intuitively.\\n\\nQ5. There is a lack of experimental analysis demonstrating the effectiveness of the proposed model structure.\\n\\nWe have supplemented this with a more detailed experimental analysis to verify the effectiveness of the DI and CE modules and explain their impact on the model performance. We divide the ablation study into two parts, studying the DI and CE modules respectively, so as to discuss the details in more depth.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The manuscript introduces Dice-GAN which incorporates Diversity Injection and Consistency Enhancement modules to address critical challenges in generating high-quality, diverse images while maintaining semantic alignment with textual descriptions. Experimental results demonstrate that Dice-GAN outperforms state-of-the-art models on the CUB and MS-COCO datasets, underscoring its efficacy in enhancing visual quality and fidelity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of the DI and CE modules marks a significant advancement in text-to-image synthesis. The DI module, which injects noise at multiple stages of generation, and the CE module, which integrates word vectors with hybrid attention, effectively improve both image diversity and semantic consistency.\\n\\n2. This method achieves SOTA performance.\", \"weaknesses\": \"1. The manuscript lacks a detailed examination of the model's performance across varying levels of text complexity. 
Given that text descriptions can range from simple to highly nuanced, an analysis based on text complexity would provide stronger evidence of the model's robustness and its ability to handle diverse linguistic inputs.\\n\\n2. The reviewer wants to see the experiment about computational efficiency.\\n\\n3. The study does not thoroughly investigate the model's capacity to handle various textual attributes, such as color, size, and object positioning. A more focused evaluation of these specific attributes could offer deeper insights into the model's capability to accurately reflect detailed descriptive features and further demonstrate its adaptability.\", \"questions\": \"1. How does Dice-GAN perform under different levels of input noise? Given the pivotal role of the DI module, understanding the model's sensitivity to noise levels could provide valuable insights into balancing image diversity and visual quality effectively.\\n\\n2. What measures were implemented to ensure that the DI module does not excessively degrade visual quality due to noise injection? A detailed discussion on the strategies used to balance noise injection and maintain visual quality would be beneficial.\\n\\n3. Does the CE module exhibit limitations in maintaining semantic consistency for longer, more detailed text descriptions? An analysis of the CE module's performance with nuanced and complex descriptions would provide a clearer understanding of its efficacy in handling diverse linguistic inputs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, they propose the diversity injection and consistency enhancement module for text-to-image generation. 
This method contributes to producing high-quality images with increased diversity and enhanced semantic consistency based on text descriptions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Enhanced Diversity: The Diversity Injection module injects noise and text vectors multiple times, ensuring a broad range of image outputs without sacrificing structure.\\n\\n2. Improved Consistency: The Consistency Enhancement module dynamically adjusts focus on image regions, aligning visuals closely with text descriptions.\", \"weaknesses\": \"1. A comparison with recently proposed text-to-image generation models is needed. Not only should there be an analysis of issues with GANs, but also recent Diffusion models, along with performance comparisons. Is there a specific reason you only compared with ShiftDDPMs in the case of Diffusion models? Please provide a detailed response.\\n\\n2. Please provide a detailed explanation of the table and figure captions.\\n\\n3. Performance comparisons on diverse datasets are required. Additionally, besides IS and FID, comparisons with other performance metrics are requested (e.g., CLIP score).\\n\\n4. The examples of qualitative results are too limited.\\n\\n5. There is a lack of experimental analysis demonstrating the effectiveness of the proposed model structure.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"Dear reviewer, thank you very much for your valuable suggestions.\\n\\nIn response to the questions you raised, our answers and modifications are as follows. Please help us to assess if these modifications are satisfactory.\\n\\nQ1. The novelty of the work is limited. \\n\\nWe acknowledge that adding noise to enhance diversity is not entirely new. 
However, we emphasize that the innovation of the DI module is to fuse the noise injection with the text information and balance the global structure through the self-attention mechanism, to promote diversity while maintaining the image quality. We will emphasize this more explicitly in the abstract. Specifically, our modification is as follows.\\n\\n**To improve the diversity of image generation, we design a diversity injection module, which injects noise several times during the image generation process, fuses the noise with the textual information, and incorporates a self-attention mechanism to help the generator maintain global structural consistency while enhancing the diversity of the generated image.**\\n\\nQ2. Lack of clarity in Sec 3.2 writing and Fig 4. \\n\\nWe have improved the writing of Section 3.2 and provide a clearer Figure 4 for a better understanding of how the CE module works. Specifically, our modification is as follows.\\n\\n**To enhance the consistent generation of image features and textual information, we consider improving the model from both channel and spatial perspectives. In the Consistency Enhancement (CE) module, we successfully integrate the word vector $W$ into the conditional channel attention mechanism, which is used to identify and enhance the most important feature channels in the generator to improve the quality of the generated images. By learning the importance of each channel, the model can pay more attention to the information that is crucial for image generation while suppressing irrelevant features. This is combined with a spatial attention mechanism to ensure that high-level and low-level features complement each other in generating the image, enhancing the detail and structure of the image. This integration aims to improve the visual quality throughout the image generation process. 
Figure 4 provides a visual representation of the integrated structure of the CE module.**\\n\\nIn the paragraph on the hybrid attention feature generation stage, we add the following sentences.\\n\\n**The maximum pooling operation retains the maximum values for each channel, which represent the most salient features in the feature map, such as critical parts of the image or edge information. The average pooling operation calculates the average of all values for each channel, which reflects the overall characteristics of the feature map. Average pooling captures the global information in the feature map, including background and texture. Thus, it can preserve the background information in the feature map and help the model better understand the overall structure.**\\n\\nIn the second paragraph of the hybrid attention feature generation stage, we will add the following sentence before \\u201cThe computational steps are outlined in Equation 3 and Equation 4.\\u201d\\n\\n**The channel attention weight map $M_c$ is generated to consolidate information across the complete feature map. This process enhances the significance of crucial channels while diminishing the influence of less critical ones, thereby enhancing model efficiency. In $M_c$, each element signifies the weight of the respective channel in the feature map, derived by summing the average pooling weight and the maximum pooling weight of that specific channel. A greater weight value denotes increased channel importance in message conveyance.**\\n\\nIn the third paragraph, we will add the following sentence after \\u201cSubsequently, a sigmoid function is applied to the output to obtain the spatial attention map Ms.\\u201d\\n\\n**$M_s$ is employed to pinpoint spatial positions within the feature map, highlighting key local regions essential for image synthesis. 
This approach enables the model to concentrate on intricate details and textures during image generation, leveraging contextual cues to produce more nuanced and contextually rich images by directing its focus toward distinct image regions.**\\n\\nQ3. Missing comparisons with multi-stage methods, including StackGAN++ [4].\\n\\nWe have supplemented the comparison experiments with multi-stage methods such as StackGAN, StackGAN++, and StyleGAN. \\n\\nQ4. Missing ablation studies.\\n\\nWe have supplemented ablation experiments, such as analyzing the impact of the number of feature fusion layers in the DI module and of the different submodules in the CE module, to verify the effectiveness of the model design.\\n\\nQ5. Provide results on Imagenet or Imagenet subset to back their claims.\\n\\nWe have added Figure 7 to compare images from the ImageNet dataset, on which the Inception model was pre-trained, with images from the MS-COCO dataset: ImageNet images typically feature a single primary object, whereas MS-COCO images are often combinations of multiple objects.\"}", "{\"title\": \"Revision updated\", \"comment\": \"Dear reviewer, the latest revised version is ready for your evaluation. Please help us to assess if the changes are satisfactory. If there are additional improvements we can make, please kindly let us know. Thank you again for the valuable suggestions.\"}" ] }
514rdneWOX
LongHalQA: Long-Context Hallucination Evaluation for MultiModal Large Language Models
[ "Han Qiu", "Jiaxing Huang", "Peng Gao", "Qin Qi", "Xiaoqin Zhang", "Ling Shao", "Shijian Lu" ]
Hallucination, a phenomenon where multimodal large language models (MLLMs) tend to generate textual responses that are plausible but unaligned with the image, has become one major hurdle in various MLLM-related applications. Several benchmarks have been created to gauge the hallucination levels of MLLMs, by either raising discriminative questions about the existence of objects or introducing LLM evaluators to score the generated text from MLLMs. However, the discriminative data largely involve simple questions that are not aligned with real-world text, while the generative data involve LLM evaluators that are computationally intensive and unstable due to their inherent randomness. We propose *LongHalQA*, an LLM-free hallucination benchmark that comprises 6K long and complex hallucination texts. LongHalQA features GPT4V-generated hallucinatory data that are well aligned with real-world scenarios, including object/image descriptions and multi-round conversations with 14/130 words and 189 words, respectively, on average. It introduces two new tasks, hallucination discrimination and hallucination completion, unifying both discriminative and generative evaluations in a single multiple-choice-question form and leading to more reliable and efficient evaluations without the need for LLM evaluators. Further, we propose an advanced pipeline that greatly facilitates the construction of future hallucination benchmarks with long and complex questions and descriptions. Extensive experiments over multiple recent MLLMs reveal various new challenges when they are handling hallucinations with long and complex textual data.
[ "hallucination benchmark", "multimodal large language model" ]
Reject
https://openreview.net/pdf?id=514rdneWOX
https://openreview.net/forum?id=514rdneWOX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywALbLtvoE", "vVnNY5ToL8", "rLbyeLxBdK", "pd1fL3w18R", "nceNSzU9O1", "nNDBBmQzu6", "mhV29TMtlP", "luxFWyN5gi", "lN4hSKwxM7", "j5QFhnsXDP", "hpSGeSTxuT", "gZQv0Lu6Vz", "gR7vsMWNIr", "gB2oBkd0rn", "fANbJtBgi9", "ayLeKJFICP", "ZlqxIwhaCQ", "Z88WqcA3ae", "ViP7GrcHNo", "TzmvlISCMU", "SRJO9PHLcr", "Hy75WjWJ8i", "Hnme96KH0o", "AbRnBQ3wTA", "7uu36YU0eC" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732180905931, 1732110399893, 1734853761623, 1730694386301, 1729287563653, 1732550653487, 1732377404451, 1732373287762, 1733089053886, 1732115965011, 1730285586949, 1732180958265, 1732119268882, 1733065310365, 1730727592625, 1737523808065, 1732412643421, 1732554955419, 1733196694221, 1732287799537, 1732110745578, 1733062690824, 1732116432646, 1732553245606, 1733064302014 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Area_Chair_Vk4J" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_pkPj" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_Xq4o" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_Nmef" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_vfgV" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_Nmef" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_vfgV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_Xq4o" ], [ "ICLR.cc/2025/Conference/Submission6985/Reviewer_Xq4o" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ], [ "ICLR.cc/2025/Conference/Submission6985/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your constructive comments and suggestions, which are exceedingly helpful in improving our paper. Our point-to-point responses to your comments are listed below.\\n\\n> 1. Experimental results in Table 8 do not suggest a strong consistency between generation accuracy and MCQ accuracy. For example, Fuyu-8b and LLaVA 1.5-7b exhibits a score difference -12.41 in MCQ while -41.0 in generation. It is necessary to include more methods into consideration, especially those proposed for tackle the hallucination of MLLMs such as LLaVA-RLHF, RLHF-V, Silkie, and POVID.\\n\\nWe would like to clarify two points regarding the performance gaps shown in Table 8. (1) First, the low performance of the Fuyu model is primarily due to it sometimes fails to follow instructions to continue the descriptions/conversation. Instead, it often generates content unrelated to the given image, leading to unsatisfied accuracy. (2) Second, in the LongHalQA setting, we treat hallucinations generated by GPT as potentially misleading visual content, and evaluate MLLMs to directly describe such challenging content. 
However, in free-generation scenarios, even with preceding context as guidance, MLLMs sometimes still avoid addressing this challenging content and instead describe only the simplest visual content. This tendency explains why the accuracy in free-generation settings is generally higher than the completion task accuracy in LongHalQA, as shown in Table 8.\\n\\nAdditionally, based on your suggestion, we also evaluate the performance of the hallucination-mitigation methods in both the MCQ and free-completion scenarios. However, we observe significant differences in GPT's responses for the same prompts compared to the results we obtained a few months ago, perhaps due to recent updates. We therefore re-evaluated some models in Table 8 and summarized the results alongside those of the hallucination-mitigation methods in the table below.\\n\\nMost hallucination-mitigation methods performed reasonably well in both the free-generation and MCQ settings. POVID and Silkie consistently achieve improvements over their baseline models, LLaVA-1.5-7B and Qwen-VL-Chat. However, while these methods reduce hallucination, they also affect the models' output behaviors and instruction-following abilities. For instance, POVID and RLHF-V tend to produce outputs significantly shorter than those of other models. LLaVA-RLHF sometimes fails to continue the descriptions/conversation based on prompts, instead generating an entirely new image description, which leads to lower Free Generation Accuracy. In contrast, the original models in Table 8 produced outputs with more consistent lengths. As a result, directly comparing these models in the free-generation setting might be less fair. 
Our MCQ-based hallucination completion task, however, avoids the issue of output length differences by testing the models\\u2019 tendency to generate hallucinations under the same challenging scenarios, enabling fairer evaluations.\\n\\nTable 1: Comparison of Multi-Choice and Free-Generation settings on Hallucination Completion for hallucination-mitigation methods.\\n| Model | Number of Generated Words | Free Generation Accuracy | MCQ Accuracy |\\n|:---------------|:--------------------------:|:--------------------------:|:-------------:|\\n| LLaVA 1.5-7B | 25.68 | 73.5 | 36.08 |\\n| LLaVA 1.6-7B | 38.74 | 84 | 43.40 |\\n| LLaVA 1.5-13B | 32.80 | 74 | 37.58 |\\n| Qwen-VL-Chat | 32.00 | 70 | 36.57 |\\n|----------------|--------------------------|--------------------------|-------------|\\n| POVID | 14.38 | 78.75 | 38.55 |\\n| RLHF-V | 8.73 | 74.5 | 31.50 |\\n| LLaVA-RLHF | 147.79 | 56.75 | 40.60 |\\n| Silkie | 37.27 | 74.25 | 38.90 |\"}
The poor accuracy (mostly below 50%) indicates that these hallucinations are not specific to GPT-4V but rather a common problem faced by most MLLMs. We agree that incorporating more models would provide a more comprehensive view of MLLM hallucination. We plan to employ other powerful models beyond GPT, incorporating more challenging hallucination content generated by various MLLMs to further enhance the diversity of LongHalQA.\\n\\nWe would highlight that the completion task in LongHalQA covers 12 different types of hallucinations and includes 2,139 long-context samples, with completion options featuring three different possible hallucinations. This diverse set of queries is able to capture most types of potential hallucinations the tested model X might generate.\\n\\n>3. Wouldn\\u2019t a model evaluation approach where Model X generates its own text reveal more relevant hallucinations, as done in Kaul et al.[1] and Jiang et al.[2]?\\n\\nWe believe that our LongHalQA and existing generative evaluation benchmarks each have complementary strengths. \\n\\nExisting generative hallucination benchmarks, such as those by Kaul et al.[1] and Jiang et al.[2], provide a straightforward evaluation of MLLM-generated content. However, these benchmarks have two clear constraints: 1) reliance on external MLLM evaluators; 2) limited scope of evaluated hallucinations. For example, Kaul et al.[1] use the FLAN-T5 model and can only detect simple object-existence hallucinations. Jiang et al.[2] build 2M data samples for fine-tuning the LLaVA-1.5-13B model as an evaluator for hallucinations related to objects, relationships, attributes, and events. When other types of hallucinations or data from other domains are introduced, the LLM evaluator must be re-trained, restricting the benchmark's flexibility and applicability. 
Additionally, benchmarks based on MLLM evaluators may suffer from the evaluators' randomness, as well as low efficiency for both generating and evaluating long descriptions.\\n\\nAs a comparison, by presenting potential hallucination content as completion options, LongHalQA directly evaluates the hallucination levels of MLLMs on these challenging contents, leading to a more challenging assessment while covering more diverse types of hallucinations. Additionally, the MCQ format simplifies the testing process, making it easier to obtain definitive evaluation results. Further, LongHalQA provides detailed assessments of complex hallucinations in long-context scenarios, excelling in efficiency, scalability, and diversity. We believe LongHalQA can complement existing benchmarks in evaluating long, complex hallucinations of MLLMs.\\n\\n\\n[1] Kaul P, Li Z, Yang H, et al. THRONE: An object-based hallucination benchmark for the free-form generations of large vision-language models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 27228-27238.\\n[2] Jiang C, Jia H, Dong M, et al. Hal-eval: A universal and fine-grained hallucination evaluation framework for large vision language models[C]//Proceedings of the 32nd ACM International Conference on Multimedia. 2024: 525-534.\"}", "{\"metareview\": \"The paper introduces LongHalQA, a new benchmark for evaluating hallucinations in multi-modal large language models (MLLMs). Hallucinations occur when a model generates text that misrepresents the input image. The benchmark tries to addresses the key limitations in existing evaluation methods: the simplicity of discriminative tasks and the inefficiency of open-ended generative tasks. 
LongHalQA consists of 6,000 complex hallucinatory texts that mimic real-world scenarios, formalized in two task types: hallucination discrimination and hallucination completion, which combine both discriminative and generative evaluations into a single multiple-choice format. The authors conduct experiments on a range of open-source MLLMs and the closed-source GPT4o.\\n\\nReviewers agree that LongHalQA investigates an important direction in the field and that the motivation is reasonable and practical. Converting the problem to a single multiple-choice format eliminates the need for an LLM evaluator, which makes the benchmark more stable and reliable. \\n\\nHowever, reviewers are also concerned about the generation process, e.g., that using GPT4V to generate hallucinations does not make it possible to evaluate a given model's own hallucinations, and about the lack of verification of the benchmark. In addition, two of the reviewers are also concerned about the claim that the multiple-choice question constitutes a generative evaluation rather than a discriminative evaluation.\", \"additional_comments_on_reviewer_discussion\": \"The authors were engaged during the discussion period and provided more details about the generation process. However, the major concerns remain and additional work is needed to reach the ICLR standard.\"}", "{\"summary\": \"This paper proposes a long-context hallucination benchmark. This benchmark aims to solve two problems in the existing evaluation pipeline: discriminative tasks are too easy, and open-ended generative tasks are too time-consuming. To achieve this, the authors propose LongHalQA, which unifies discriminative and generative tasks as multi-choice problems. Also, they formulate the construction of LongHalQA as a pipeline to construct future hallucination benchmarks with long and complex questions and descriptions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper is well-written and easy to follow.\\n2. The motivation is reasonable and practical. I think this benchmark will accelerate the development of MLLMs with regard to hallucination. \\n3. The analysis of the experiment is relatively comprehensive.\", \"weaknesses\": \"1. A relatively small number of evaluated models.\\n2. No comparison of the performance of existing methods for mitigating hallucination in MLLMs. I'm interested in whether existing methods show improvements on LongHalQA.\\n3. Lack of related work on methods for reducing hallucination in MLLMs.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the issue of hallucination in multimodal large language models (MLLMs), where generated text doesn't match the input image. To solve problems with existing benchmarks, the authors propose LongHalQA, a new benchmark with 6,000 complex hallucinatory texts that mimic real-world scenarios. It introduces two tasks: hallucination discrimination and hallucination completion, which combine both discriminative and generative evaluations into a single multiple-choice format. This approach avoids the need for LLM evaluators, making the evaluation process more reliable and efficient. The paper also presents a new pipeline for creating complex hallucination benchmarks and provides experiments showing that recent MLLMs struggle with long, complex text.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. LongHalQA addresses the limitations of previous benchmarks by creating a comprehensive dataset of hallucination text that mirrors real-world scenarios, providing a more accurate and complex testing environment for MLLMs.\\n\\n2. 
By eliminating the need for LLM evaluators, the benchmark ensures more stable and reliable results, avoiding the randomness and computational intensity associated with LLM-based evaluations.\n\n3. The combination of both discriminative and generative evaluation tasks in a multiple-choice format allows for a holistic assessment of MLLM performance in handling hallucinations, making the evaluation process more efficient.\", \"weaknesses\": \"1. How is a model evaluated with the Hallucination Completion task? What is the prefix text for evaluation? Is it the first word?\n2. Why can the Hallucination Completion task be seen as a generative evaluation? The multi-choice question is still a discriminative question.\n3. \u201cthen analyze and filter them based on dataset annotations and GroundingDINO\u201d: how did the authors analyze and filter?\n4. Lack of a comprehensive survey of hallucination in Large Vision-Language Models.\n[1] Object hallucination in image captioning\n[2] Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption\n[3] FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models\n[4] Analyzing and mitigating object hallucination in large vision-language models\n[5] FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback\n5. The proposed LLM-free hallucination benchmark does not offer significant advantages, as the approach still requires various tools, LVLMs, and manual verification, leading to low efficiency.\n6. The benchmark has not demonstrated greater reliability compared to existing ones, such as through experimental validation.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. 
Regarding your points about generative and discriminative evaluations, efficiency issues, and benchmark validation, we reply to each of these aspects below and hope to address your concerns.\n\n>1. The consistency between MCQ evaluation and free-form generative evaluation, and the efficiency advantage.\n\nLongHalQA includes two tasks: hallucination discrimination and hallucination completion. Our hallucination completion task adopts a discriminative multiple-choice format to simulate generative evaluation, addressing randomness and reliance on external MLLMs and reducing evaluation costs. The task prompt we designed, \"Continue the following description of the image.\", is also closely aligned with the generative evaluation prompt, \"Describe the given image,\" rather than with traditional hallucination discrimination benchmarks, which typically focus on binary yes-or-no questions about object existence or whether statements are true.\n\nWe believe that the discriminative MCQ format and free-form generative evaluation each have their strengths and can complement one another. The advantage of generative evaluation lies in allowing direct access to the model's outputs. However, it has high evaluation costs, depends on external models, and struggles to reliably assess complex hallucinations. In contrast, the MCQ format can directly leverage the challenging visual content prone to hallucinations for MLLM evaluation. MCQ also reduces evaluation costs, eliminates the need for external models, and provides more definitive assessments.\n\nTable 8 in our paper demonstrates the consistency between the MCQ and generative evaluations in ranking MLLMs. We also conduct the following experiment using the image description task to rank MLLMs by their hallucination levels and detailedness, with more hallucination mitigation methods evaluated in the table. 
Our MCQ evaluation aligns closely with this generative evaluation based on the typical image description task.\n\nIn terms of efficiency advantages, as shown in Figure 3, the MCQ format is significantly more efficient than the original generative evaluation by free-form completion in Table 8 or by the image description task in the table below. Moreover, Figure 3 only reflects generation time and does not account for the additional evaluation time required by the generative evaluations. The MCQ format can directly output accuracy scores without extra evaluation steps, resulting in even greater efficiency.\n\nTable 1. Comparison of MCQ evaluation and generative evaluation with the image description task. The last two columns are the performance and ranking of the MCQ hallucination completion task with image description data.\n| | | Free | Generated | Description | | | Hall | Completion | \n|:--------------|:---------------:|:-----------:|:----------------:|:-------------:|:--------------:|:--------------:|:------------:|:--------------:|\n| **Methods** | **Number of Words** | **Detail Rank** | **Detail Score** | **Hall Rank** | **Hall Score** | **Avg. Rank** | **MCQ Rank** | **Desc.(MCQ)** |\n| MiniCPM-V2 | 135.3 | 2 | 6.45 | 3 | 6.66 | 2 | 2 | 44.07 |\n| Qwen2-VL-2B | 107.6 | 3 | 6.34 | 2 | 6.78 | 1 | 1 | 47.18 |\n| LLaVA 1.6-7B | 177.4 | 1 | 6.52 | 6 | 5.83 | 4 | 3 | 39.47 |\n| LLaVA 1.5-7B | 104.8 | 8 | 5.46 | 10 | 5.38 | 9 | 9 | 32.80 |\n| + POVID | 95.5 | 10 | 5.34 | 5 | 5.98 | 6 | 7 | 36.25 |\n| + RLAIF | 104.1 | 7 | 5.52 | 1 | 6.80 | 4 | 6 | 36.48 |\n| LLaVA 1.5-13B | 102.9 | 11 | 5.33 | 11 | 5.25 | 11 | 11 | 31.53 |\n| + LLaVA-RLHF | 116.5 | 6 | 5.54 | 7 | 5.78 | 6 | 4 | 37.17 |\n| Qwen-VL-Chat | 97.3 | 5 | 5.70 | 9 | 5.55 | 8 | 8 | 33.14 |\n| + Silkie | 99.2 | 4 | 6.23 | 4 | 6.34 | 3 | 4 | 37.17 |\n| Muffin | 100.8 | 9 | 5.45 | 11 | 5.25 | 10 | 12 | 17.15 |\n| + RLHF-V | 51.3 | 12 | 4.33 | 8 | 5.60 | 12 | 10 | 31.99 |\"}", "{\"comment\": \"Thank you very much for your feedback.\n\nHowever, we want to argue that LongHalQA achieves robust evaluation of these hallucination-tackling methods over their baselines. It is our mistake that the formatting in the above comment is unclear and does not directly present the comparison between these methods and their respective baselines. 
The complete performances are as follows:\n\nTable 1: MCQ accuracy of methods that tackle multimodal hallucination on the Hallucination Completion task of LongHalQA.\n| Accuracy | Description | Conversation | Average |\n|:---------------------|:--------------------------:|:--------------------------:|:--------------------------:|\n| LLaVA 1.5-7B | 32.80 | 39.37 | 36.08 |\n| + POVID | 36.25 | 40.86 | 38.55 |\n| + RLAIF | 36.48 | 44.02 | 40.25 |\n|------------------------|------------------------|------------------------|------------------------|\n| LLaVA 1.5-13B | 31.53 | 43.62 | 37.58 |\n| + LLaVA-RLHF | 37.17 | 44.02 | 40.60 |\n|------------------------|------------------------|------------------------|------------------------|\n| Qwen-VL-Chat | 33.14 | 40.00 | 36.57 |\n| + Silkie | 37.17 | 40.63 | 38.90 |\n|------------------------|------------------------|------------------------|------------------------|\n| Muffin | 17.15 | 26.46 | 21.80 |\n| + RLHF-V | 31.99 | 31.02 | 31.50 |\n\nAll the methods have robustly improved the baseline model on the hallucination completion task. Most methods primarily improve their performances on the description data, which aligns with their training on the image description task. Methods incorporating more diverse tasks and datasets, such as RLAIF, also achieve promising improvements for the conversation data. These comparisons demonstrate that LongHalQA can robustly evaluate the effectiveness of these methods in mitigating multimodal hallucinations.\n\nIn contrast, the performance of hallucination-tackling methods under free-form completion, compared to LongHalQA, is indeed less robust. It is influenced by various factors, such as the model's instruction-following ability after re-tuning, changes in output length, and updates of the evaluator (e.g., GPT). 
Our MCQ completion task can largely avoid these issues and provide a more robust evaluation, supplementing the existing generative benchmarks that rely on evaluation by external MLLMs.\"}", "{\"comment\": \"Thanks for the response and the effort of conducting more evaluations. The results, however, do not address my concern about the consistency of MCQ evaluation and generative evaluation. I would change my score to 5 as the results seem to be non-robust on methods tackling multimodal hallucination.\"}", "{\"title\": \"Issues remain\", \"comment\": \"While I appreciate the responses to the points I have raised, I do believe that many of the questions remain:\n\n### Logic of the benchmark\nI still maintain that using GPT4V to _generate_ hallucinations does not make it possible to evaluate a given model's hallucinations.\nThe authors propose to ask an MLLM of interest to determine hallucinations amongst a set of answers generated by another model (GPT4V). Taking Figure 1 as an example: the description provided contains numerous concepts that an MLLM of interest _given the chance_ may hallucinate about: (a) species of the animal drawing the carriage; (b) colour of the animal; (c) colour of the carriage; (d) colour of the wheels; (e) clothes of the man (I suspect GPT4V might've incorrectly said 'vest' instead of 'shirt') etc. etc.\nNone of these potential hallucinations of a model are being analysed in this work because the model has never been asked to or given the \"freedom\" to hallucinate.\n\nMoreover, I agree with the point raised by Reviewer Xq4o that one of the two tasks, \"Hallucination Completion\", is still not really generative and really asks the model to discriminate between options provided by GPT4V. 
This counts against the authors' claim that they are doing generative evaluation of an MLLM of interest.\n\nFinally, you rightly argue that previous methods require LLM evaluators and additional inference time to generate MLLM hallucinations, but I would note that this work requires human effort to manually verify that only one hallucination has been made and that the hallucinations are correct. The former is much more scalable than the latter, as LLM evaluators are becoming more powerful for the same compute budget and the cost to generate a token from a given LLM is likely to decrease with time.\n\nWhile I agree with the authors that the _evaluation_ process of the method is relatively simple because it is in effect a complex MCQ benchmark, the _creation_ process requires human effort for each potential question, and I am not convinced the method actually measures what a hallucination benchmark should. Particularly with regards to the claim of \"generative evaluation\", this claim is fundamentally wrong. On the above basis I am reluctant to change my score, despite the clear effort from the authors.\n\n### Details\nThank you to the authors for providing clarifications. I believe that all of what has been provided should be in the main body of the paper (and more); the details of how benchmarks are created and used are important for the readers, alongside clear code. I do not appear to see any paper revisions which incorporate these additional details, and I personally think this work could do with more time to be structured such that these important details are included.\n\nI thank the authors for their efforts, but I do not believe the fundamental concerns I raised have been addressed.\"}", "{\"comment\": \"We sincerely thank the reviewer for the valuable feedback and the positive evaluation of our work. Below, we respond to each of the raised concerns.\n\n> 1. A relatively small number of evaluated models.\n\nThank you for your suggestion. 
Due to the time constraint, we selected only a few widely adopted MLLMs for evaluation. Additionally, we skipped some MLLMs that do not support long-context scenarios due to limitations on context length, such as the InstructBLIP and MiniGPT4 series, which further limited our selection. We will continuously evaluate and report the results of other MLLMs on LongHalQA in the future.\n\n> 2. No comparison of the performance of existing methods for mitigating the hallucination of MLLMs. I'm interested in whether existing methods show improvements on LongHalQA.\n\nThank you for your valuable suggestion. We evaluate several methods that aim at mitigating hallucinations in MLLMs, as shown in the table below. We also show the performance of their baseline MLLMs for comparison. Most of these methods employ RLHF (Reinforcement Learning from Human Feedback) or DPO (Direct Preference Optimization) to refine MLLM output preferences and reduce hallucinations.\n\nSeveral points can be observed from our evaluations:\n\n(1) Most methods use image description tasks to construct preference data, which is well aligned with our observation that most methods significantly improve over the baseline model for image description in the completion task.\n\n(2) The preference optimization based on image description also effectively improves performances in discriminating hallucinations of object description, consistent with their gains observed on prior benchmarks such as POPE. However, these methods showed limited improvements for discrimination tasks under the long-context setting, which involves detailed image descriptions and multi-round conversation. When models are tasked to identify the reasons for hallucinations, their performances drop mostly on the hallucination discrimination task under multiple-choice question (MCQ) settings.\n\n(3) The preference optimization based on the image description task yields limited benefits for multimodal conversational capabilities. 
For example, CSR and POVID result in performance drops to around 20\% on discrimination tasks for the conversation. We found this is largely due to a decrease in instruction-following capabilities, where they fail to process long-context queries and correctly output the option letter. Furthermore, the improvements in hallucination completion tasks for conversation data are also limited compared to their baseline models.\n\nThese findings highlight the current limitations of preference-based hallucination mitigation methods in addressing long-context scenarios, as well as the lack of diversity in tasks and data formats. We conjecture that improving the diversity and length of the constructed preference data has great potential to mitigate these issues.\n\nTable 1: Experiments on methods for mitigating hallucinations of MLLMs on LongHalQA. ''Obj.'', ''Des.'' and ''Con.'' are the data formats of object description, detailed image description, and multi-round conversations in LongHalQA. 
Columns 2-6 are the results of the hallucination discrimination task, and columns 7-8 are the results of the hallucination completion task.\\n\\n| | | Hallucination | Discrimination | | | Hallucination | Completion |\\n|:------------------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|\\n| **Methods** | **Obj.(Binary)** | **Des.(Binary)** | **Con.(Binary)** | **Des.(MCQ)** | **Con.(MCQ)** | **Des.** | **Con.** |\\n| LLaVA-1.5-7B | 45.18 | 36.59 | 33.79 | 37.17 | 32.92 | 32.80 | 39.37 |\\n| + CSR | 44.52 | 36.59 | 31.30 | 36.66 | 20.16 | 32.86 | 36.54 |\\n| + POVID | 45.69 | 36.66 | 32.54 | 37.03 | 21.20 | 36.25 | 40.86 |\\n| + RLAIF | 60.29 | 36.88 | 35.79 | 35.28 | 30.17 | 36.48 | 44.02 |\\n| LLaVA-1.5-13B | 52.70 | 36.95 | 35.85 | 45.99 | 41.21 | 31.53 | 43.62 |\\n| + LLaVA-RLHF | 48.83 | 36.66 | 36.91 | 41.11 | 37.78 | 37.17 | 44.02 |\\n| Qwen-VL-Chat | 58.69 | 36.66 | 34.29 | 37.97 | 36.10 | 33.14 | 40.00 |\\n| + Silkie | 62.19 | 36.95 | 35.22 | 37.54 | 35.29 | 37.17 | 40.63 |\\n| Muffin-13B | 46.50 | 52.99 | 67.46 | 38.85 | 30.92 | 17.15 | 26.46 |\\n| + RLHF-V | 60.29 | 38.19 | 40.90 | 38.12 | 25.62 | 31.99 | 31.02 |\\n\\n> 3. Lack of related work about the method about how to decrease the hallucination of MLLMs.\\n\\nThank you for your suggestion. We will add a discussion on hallucination mitigation methods in the related work section.\"}", "{\"summary\": \"This paper proposes a new MLLM hallucination benchmark consisting of both hallucination discrimination and hallucination completion questions. The author unifies both discriminative and generative hallucination evaluation into the form of multiple-choice question where models only have to decode one token as response. The results show the proposed benchmark is challenging for both open-source MLLMs in varying sizes and strong GPT-4o.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The proposed benchmark can contribute to the further development of this field in tackling and analyzing the hallucination of MLLMs.\n2. The proposed unification of discriminative and generative questions largely saves the evaluation cost by reducing the decoding sequence length.\", \"weaknesses\": \"1. Experimental results in Table 8 do not suggest a strong consistency between generation accuracy and MCQ accuracy. For example, Fuyu-8B and LLaVA 1.5-7B exhibit a score difference of -12.41 in MCQ but -41.0 in generation. It is necessary to take more methods into consideration, especially those proposed for tackling the hallucination of MLLMs, such as LLaVA-RLHF, RLHF-V, Silkie and POVID.\n2. Hallucination pairs are generated by GPT-4V, which is prone to generating hallucinated visual descriptions. The authors have to explain how #317 controls the generation quality.\", \"questions\": \"1. It is known that [LLMs are non-robust multiple-choice selectors](https://arxiv.org/abs/2309.03882). How do you tackle this problem when constructing this benchmark?\n2. #419 mentions the 'ranking-based accuracy' of Fuyu-8B, while I could not find the corresponding results in Table 4. Is it a writing issue?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> 2. Hallucination pairs are generated by GPT-4V, which is prone to generating hallucinated visual descriptions. The authors have to explain how #317 controls the generation quality.\n\nThank you for raising this question. We would like to clarify that this step does not require GPT to describe the image from scratch. Instead, GPT only needs to revise its previously generated data based on the sentence-level analysis results obtained in the previous hallucination-checking step to construct the hallucination pairs. 
Although GPT-4V does face significant challenges with multimodal hallucinations, it still demonstrates strong text-processing capability, which is what we primarily rely on for constructing hallucination pairs. We also conducted random checks on the hallucination pairs generated by GPT and found that they generally met our requirements in revising the hallucinatory texts.\n\n> 3. It is known that LLMs are non-robust multiple-choice selectors. How do you tackle this problem when constructing this benchmark?\n\nWe have considered this fact in constructing our MCQ benchmark. As mentioned in L261, we randomly shuffle the order of the four options for each MCQ to reduce the impact of option order.\n\n> 4. #419 mentions the 'ranking-based accuracy' of Fuyu-8B, while I could not find the corresponding results in Table 4. Is it a writing issue?\n\nThank you for pointing this out. This is indeed a writing error on our part. We will revise the text in the paper.\"}", "{\"comment\": \"> 6. The benchmark has not demonstrated greater reliability compared to existing ones, such as through experimental validation.\n\nThank you for raising this point. To demonstrate LongHalQA's reliability, we evaluate several hallucination mitigation methods on LongHalQA and compare the evaluation results with those over existing benchmarks, including POPE (for object hallucinations), MMHal (comprehensive questions about image content with answers scored by GPT), and MRHal (hallucinations for multi-round dialogue evaluated by GPT). 
The improvements of evaluated methods over LongHalQA (for object descriptions under the hallucination discrimination task, image descriptions, and multi-round conversations under the hallucination completion task) are generally consistent with those achieved over POPE, MMHal, and MRHal, demonstrating the reliability of LongHalQA in MLLM evaluations.\n\nFurthermore, several points can be observed through a comprehensive analysis of the evaluation results with LongHalQA:\n\n(1) Most methods use image description tasks to construct preference data, and thus we observe that most methods significantly improve over the baseline for image description data in the completion task.\n\n(2) This preference optimization based on image description also effectively improves performances in discriminating hallucinations of object description, consistent with the gains observed on prior benchmarks such as POPE. However, these methods showed limited improvements for discrimination tasks in long-context settings, which involve detailed image descriptions and multi-round conversation. When models are required to identify the reasons for hallucinations correctly, most of their performances for hallucination discrimination tasks under multiple-choice question (MCQ) settings decrease.\n\n(3) The preference optimization based on the image description task yields limited benefits for multimodal conversational capabilities. For example, CSR and POVID result in performance drops to around 20\% on discrimination tasks for the conversation. We found this is due to the impaired instruction-following capabilities, where they fail to process long-context queries and correctly output the option letter. 
Furthermore, the improvements in hallucination completion tasks for conversation data are also limited compared to their baseline models.\n\nThese findings highlight the current limitations of preference-based hallucination mitigation methods in handling long-context scenarios, as well as the lack of diversity in tasks and data formats. Increasing the diversity and length of constructed preference data might help address these issues. Overall, these findings underscore the distinctive value of LongHalQA in evaluating complex hallucinations in MLLMs within long-context scenarios.\n\nTable 1: Experiments on methods for mitigating hallucinations of MLLMs on existing benchmarks.\n| Method | POPE | MMHal | MRHal |\n|:------------|:------|:------|:------|\n| LLaVA-1.5-7B | 85.9 | 2.36 | 3.38 |\n| + CSR | 87.1 | - | - |\n| + POVID | 86.9 | 2.69 | 3.46 |\n| + RLAIF | - | 3.06 | - |\n| LLaVA-1.5-13B | 85.9 | - | 3.58 |\n| + LLaVA-RLHF | - | - | - |\n| Qwen-VL-Chat | 87.1 | 2.89 | 3.71 |\n| + Silkie | - | 3.02 | 3.71 |\n| Muffin-13B | - | - | - |\n| + RLHF-V | - | 2.45 | 2.54 |\n\nTable 2: Experiments on methods for mitigating hallucinations of MLLMs on LongHalQA. ''Obj.'', ''Des.'' and ''Con.'' are the data formats of object description, detailed image description, and multi-round conversations in LongHalQA. 
Columns 2-6 are the results of the hallucination discrimination task, and columns 7-8 are the results of the hallucination completion task.\\n\\n| | | Hallucination | Discrimination | | | Hallucination | Completion |\\n|:------------|:------:|:------:|:------:|:------:|:------:|:------:|:------:|\\n| **Methods** | **Obj.(Binary)** | **Des.(Binary)** | **Con.(Binary)** | **Des.(MCQ)** | **Con.(MCQ)** | **Des.** | **Con.** |\\n| LLaVA-1.5-7B | 45.18 | 36.59 | 33.79 | 37.17 | 32.92 | 32.80 | 39.37 |\\n| + CSR | 44.52 | 36.59 | 31.30 | 36.66 | 20.16 | 32.86 | 36.54 |\\n| + POVID | 45.69 | 36.66 | 32.54 | 37.03 | 21.20 | 36.25 | 40.86 |\\n| + RLAIF | 60.29 | 36.88 | 35.79 | 35.28 | 30.17 | 36.48 | 44.02 |\\n| LLaVA-1.5-13B | 52.70 | 36.95 | 35.85 | 45.99 | 41.21 | 31.53 | 43.62 |\\n| + LLaVA-RLHF | 48.83 | 36.66 | 36.91 | 41.11 | 37.78 | 37.17 | 44.02 |\\n| Qwen-VL-Chat | 58.69 | 36.66 | 34.29 | 37.97 | 36.10 | 33.14 | 40.00 |\\n| + Silkie | 62.19 | 36.95 | 35.22 | 37.54 | 35.29 | 37.17 | 40.63 |\\n| Muffin-13B | 46.50 | 52.99 | 67.46 | 38.85 | 30.92 | 17.15 | 26.46 |\\n| + RLHF-V | 60.29 | 38.19 | 40.90 | 38.12 | 25.62 | 31.99 | 31.02 |\"}", "{\"title\": \"Looking forward to further discussion\", \"comment\": \"We sincerely appreciate your insightful and valuable comments. We have further supplemented comparisons between free-form generation and our MCQ approach on the image description task to demonstrate the consistency between the MCQ and generative evaluation, as well as human validation results to prove the reliability of the LongHalQA benchmark. As the discussion phase is about to close, we kindly ask you to take a few minutes to review our responses. If our responses have clarified your concerns, we hope you might consider raising the score. 
We look forward to hearing from you about any further feedback or suggestions for improving our work.\"}", "{\"summary\": \"This paper proposes a new benchmark for evaluating hallucinations in multi-modal large language models (MLLMs).\nThe paper makes use of GPT4V to generate image-level and object-level descriptions and conversation data for a set of images from VisualGenome. This wider range of generated data enables the proposed benchmark, LongHalQA, to evaluate various types of potential hallucination which go beyond the typical object level analysis (e.g. Is there a cat in the image?). The proposed method suggests two types of evaluation: (1) Hallucination Discrimination - the model must answer an MCQ about generated data (potentially containing hallucinations), to determine if the generated data contains hallucinations based on the image and the cause of the hallucination if present; (2) Hallucination Completion - the model must answer an MCQ, correctly selecting the answer which truthfully completes a partial conversation or description.\nThe authors conduct experiments on a range of open-source MLLMs and the closed-source GPT4o. They show that CoT prompting often has little or even a negative effect on results on LongHalQA. Finally, they conduct a study in which hallucinations in free-form generations from their questions yield similar results to using their MCQ formulation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper correctly identifies that many prior hallucination works focus on the narrow topic of object existence at an image level. 
To overcome this, they create questions which expand the evaluation to object level descriptions, object locations, attributes etc.\n\nTheir experimental results are numerous and allow the reader to see the advantages/disadvantages of each model in the different types of question in LongHalQA (Table 2-5).\n\nThe authors make comparisons of their MCQ method to a free-form generation method in Section 6 and demonstrate the advantages of using MCQ over a free-form method in terms of efficiency of evaluation.\", \"weaknesses\": \"I have two main weaknesses with this paper, unfortunately both of which I consider pretty major.\n\n### 1. The logic behind the creation of the benchmark itself.\n\nAs detailed in Section 4, all of the LongHalQA data comes from generations with GPT4V; this includes the descriptions, conversations etc. These generations are then analysed/modified with a number of checks. Furthermore, the question options themselves are generated with GPT4V. Therefore when evaluating a model X using LongHalQA, you are conditioning all reasoning/grounding/recognition of model X on the range of hallucinations GPT4V might make. This leaves a large range of potential hallucinations that are specific to model X which remain unanalysed, and which may only be obtained by generating descriptions/conversations using model X rather than GPT4V. Taking Figure 1, GPT4V and the method used in Section 4 have created a hallucination regarding the number of people seated in the carriage. Now this is a hallucination of GPT4V + Section 4, _not_ of model X. Model X may have hallucinated the species of animal, the colour of the carriage etc, all of which is left potentially undiscovered because the hallucinations model X is asked to evaluate in LongHalQA are not its own; I therefore find the logic of this benchmark slightly confused. The free-form generations of methods like that of Kaul et al. and Jiang et al. referenced in the paper need the model being evaluated e.g. 
Model X to _actually generate_ its own text and therefore its own potential hallucinations.\n\n### 2. Lack of details and clarity.\n\nThe crucial step in this work is the generation of the data for LongHalQA, detailed in Section 4. I find this section to be extremely thin on details and lacking clarity.\n1. L291 \"...then analyze and filter them based on dataset annotations and GroundingDINO...\", no information is given on how this process is done.\n2. L297, \"as illustrated in Appendix B.\" Appendix B contains a list of definitions of hallucinations used in this work.\n3. L303, \"Second, names of object present in the data are extracted, and certain image understanding tools such as GroundingDINO...\", there are no details on how objects present in the data are extracted. Which data? VG annotations or names in the GPT4V generated data or both? Which image understanding tools other than GroundingDINO are used?\n4. L314-319, GPT4V is being used to generate hallucination explanation pairs, but there is no indication that manual checking is used here despite the authors accepting that GPT4V suffers from \"severe hallucinations\" (L298); the logic here seems confused on the ability of GPT4V to create such specific data which only contains one error which is also useful for evaluation.\n5. L320-L346, same arguments as above with the ability of GPT4V to do this accurately.\n6. L344 \"except the hallucination checking that involves optional human verification\": does this mean human verification is used or not? What is the effect of using human verification in the data vs not?\n\nAdditionally, as a more general point, the prompt templates used in Appendix C are extremely hard to follow without any examples, e.g. in Figure 6 what is \"Possible Content\"? The main text asks the reader to refer to Appendix C (L465) for details and then appears to simply paste the prompts used with no explanation of what goes where.\", \"questions\": \"### 1. 
Logic Behind the Benchmark Creation\n1. Since all LongHalQA data is generated by GPT-4V, isn\u2019t Model X limited to analysing GPT-4V\u2019s specific hallucinations rather than its own?\n2. Could Model X be missing its unique hallucinations because it doesn\u2019t generate its own descriptions or conversations in LongHalQA?\n3. Wouldn\u2019t a model evaluation approach where Model X generates its own text reveal more relevant hallucinations, as done in Kaul et al. and Jiang et al.?\n\n### 2. Lack of Details and Clarity\n1. How are dataset annotations and GroundingDINO used to filter the LongHalQA data? Can details on this process be provided?\n2. How are objects identified in the data? Are these from VG annotations, GPT-4V data, or both?\n3. Other than GroundingDINO, which image understanding tools are used, and how?\n4. If GPT-4V produces hallucination explanation pairs, is there manual verification, especially given its acknowledged hallucination issues (L298)?\n5. In what cases is human verification used for hallucination checking, and how does it impact the dataset?\n\n### Additional Clarity\n1. Can examples be given for the prompt templates in Appendix C to clarify instructions like \"Possible Content\" etc. (Figure 6)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"To better demonstrate the consistency between MCQ and generative evaluation, we include a free-generation image description task for comparison. All MLLMs are extensively trained on this task, reducing the influence of factors such as instruction-following ability. For each MLLM, we use the prompt \"Describe the given image in detail.\" to generate image descriptions. 
We then feed the image, the reference description from LongHalQA, and object annotations from Objects365 to GPT, and ask GPT to score the MLLMs' generated descriptions in terms of detail level and hallucination degree. Both scores range from 0 to 10, with higher scores indicating better performance (more detailed and less hallucinated content). The results are shown in the table below.\\n\\nIn Table 1, most MLLMs achieve similar rankings under MCQ (Desc. Acc.) and generation-based (Hall. Score) evaluations. The only differences in ranking appear with LLaVA-v1.6-7B and hallucination mitigation methods like RLAIL and RLHF-V. We argue that these differences primarily arise from differences in detail levels. Most hallucination mitigation methods rank much higher for hallucination scores than for detail scores in generative evaluations. MLLMs become more cautious and tend to generate less detailed content to reduce hallucination risk [1]. In contrast, LLaVA-v1.6-7B outputs nearly twice as much content as other MLLMs, achieving the highest detail score but also producing more hallucinations overall. These differences highlight the importance of considering both detail and hallucination levels for a comprehensive evaluation of MLLMs' hallucination levels.\\n\\nBy contrast, our proposed MCQ evaluation directly queries MLLMs with the same challenging detailed content, eliminating the influence of MLLMs' varying detail levels in responding. The rankings from our MCQ evaluation also align more closely with the combined detail and hallucination scores from generative evaluations, especially for the hallucination mitigation methods. These results demonstrate the consistency between MCQ evaluation in LongHalQA and generative evaluation.\\n\\nTable 1. Comparison of MCQ evaluation and generative evaluation with the image description task.
The last two columns are the performance and ranking of the MCQ hallucination completion task with image description data.\\n| | | Free Generated Description | | | | | Hall. Completion | |\\n|:--------------|:---------------:|:-----------:|:----------------:|:-------------:|:--------------:|:--------------:|:------------:|:--------------:|\\n| **Methods** | **Number of Words** | **Detail Rank** | **Detail Score** | **Hall Rank** | **Hall Score** | **Avg. Rank** | **MCQ Rank** | **Desc.(MCQ)** |\\n| MiniCPM-V2 | 135.3 | 2 | 6.45 | 3 | 6.66 | 2 | 2 | 44.07 |\\n| Qwen2-VL-2B | 107.6 | 3 | 6.34 | 2 | 6.78 | 1 | 1 | 47.18 |\\n| LLaVA 1.6-7B | 177.4 | 1 | 6.52 | 6 | 5.83 | 4 | 3 | 39.47 |\\n| LLaVA 1.5-7B | 104.8 | 8 | 5.46 | 10 | 5.38 | 9 | 9 | 32.80 |\\n| + POVID | 95.5 | 10 | 5.34 | 5 | 5.98 | 6 | 7 | 36.25 |\\n| + RLAIL | 104.1 | 7 | 5.52 | 1 | 6.80 | 4 | 6 | 36.48 |\\n| LLaVA 1.5-13B | 102.9 | 11 | 5.33 | 11 | 5.25 | 11 | 11 | 31.53 |\\n| + LLaVA-RLHF | 116.5 | 6 | 5.54 | 7 | 5.78 | 6 | 4 | 37.17 |\\n| Qwen-VL-Chat | 97.3 | 5 | 5.70 | 9 | 5.55 | 8 | 8 | 33.14 |\\n| + Silkie | 99.2 | 4 | 6.23 | 4 | 6.34 | 3 | 4 | 37.17 |\\n| Muffin | 100.8 | 9 | 5.45 | 11 | 5.25 | 10 | 12 | 17.15 |\\n| +RLHF-V | 51.3 | 12 | 4.33 | 8 | 5.60 | 12 | 10 | 31.99 |\\n\\n\\n[1] Yue Z, Zhang L, Jin Q. Less is more: Mitigating multimodal hallucination from an EOS decision perspective[J]. arXiv preprint arXiv:2402.14545, 2024.\"}", "{\"comment\": \"> **Consistency between the hallucination completion task of LongHalQA and typical generative evaluation.**\\n\\nIn Table 8 of the paper, we compare free-form generation and MCQ evaluations, showing that the rankings are largely consistent across both settings. To further demonstrate this consistency, we conduct additional experiments using the typical image description task to assess MLLM hallucination levels while also introducing more methods, such as those designed to mitigate multimodal hallucinations.
For each MLLM, we use the prompt \\\"Describe the given image in detail.\\\" to generate image descriptions. We then feed the image, the reference description from LongHalQA, and object annotations from Objects365 to GPT, and ask GPT to score the MLLMs' generated descriptions in terms of detail level and hallucination degree. Both scores range from 0 to 10, with higher scores indicating better performance (more detailed and less hallucinated content). The results are shown in the table below.\\n\\nThe table below shows that LongHalQA achieves rankings consistent with generative evaluations that consider both hallucination levels and detailedness. This indicates that directly using visually challenging and hallucination-prone content to query MLLMs can yield results similar to directly evaluating their generated content. Such challenging content is likely to induce hallucinations in most MLLMs, making it effective for evaluating the hallucination level of MLLMs. In the future, we plan to collect more hallucinated content from advanced MLLMs to expand LongHalQA further.\\n\\nTable 1. Comparison of MCQ evaluation and generative evaluation with the image description task. The last two columns are the performance and ranking of the MCQ hallucination completion task with image description data.\\n| | | Free Generated Description | | | | | Hall. Completion | |\\n|:--------------|:---------------:|:-----------:|:----------------:|:-------------:|:--------------:|:--------------:|:------------:|:--------------:|\\n| **Methods** | **Number of Words** | **Detail Rank** | **Detail Score** | **Hall Rank** | **Hall Score** | **Avg. Rank** | **MCQ Rank** | **Desc.(MCQ)** |\\n| MiniCPM-V2 | 135.3 | 2 | 6.45 | 3 | 6.66 | 2 | 2 | 44.07 |\\n| Qwen2-VL-2B | 107.6 | 3 | 6.34 | 2 | 6.78 | 1 | 1 | 47.18 |\\n| LLaVA 1.6-7B | 177.4 | 1 | 6.52 | 6 | 5.83 | 4 | 3 | 39.47 |\\n| LLaVA 1.5-7B | 104.8 | 8 | 5.46 | 10 | 5.38 | 9 | 9 | 32.80 |\\n| + POVID | 95.5 | 10 | 5.34 | 5 | 5.98 | 6 | 7 | 36.25 |\\n| + RLAIL | 104.1 | 7 | 5.52 | 1 | 6.80 | 4 | 6 | 36.48 |\\n| LLaVA 1.5-13B | 102.9 | 11 | 5.33 | 11 | 5.25 | 11 | 11 | 31.53 |\\n| + LLaVA-RLHF | 116.5 | 6 | 5.54 | 7 | 5.78 | 6 | 4 | 37.17 |\\n| Qwen-VL-Chat | 97.3 | 5 | 5.70 | 9 | 5.55 | 8 | 8 | 33.14 |\\n| + Silkie | 99.2 | 4 | 6.23 | 4 | 6.34 | 3 | 4 | 37.17 |\\n| Muffin | 100.8 | 9 | 5.45 | 11 | 5.25 | 10 | 12 | 17.15 |\\n| +RLHF-V | 51.3 | 12 | 4.33 | 8 | 5.60 | 12 | 10 | 31.99 |\"}", "{\"comment\": \"Thanks for your response. I will keep my score based on our discussion.\"}", "{\"comment\": \"Thanks for your response. The response still doesn't address my concerns, such as considering the multi-choice question as a generative evaluation but not a discriminative evaluation, the advantage of efficiency, and the lack of verification of the benchmark (such as human correlation). Therefore, I will keep my score.\"}", "{\"comment\": \"### **Reply to Lack of Details and Clarity**\\n\\nWe sincerely appreciate your pointing out these issues. We provide further elaborations below and will revise the paper to clarify these details.\\n\\n> 1. How are dataset annotations and GroundingDINO used to filter the LongHalQA data? Can details on this process be provided?\\n\\nWe leverage GroundingDINO to remove inaccurately annotated boxes labeled as \\\"crowd\\\" and remove images that lack sufficient complexity or richness of content (such as images containing fewer than five objects).\\n\\n> 2. How are objects identified in the data?
Are these from VG annotations, GPT-4V data, or both?\\n\\nHere, we employ GPT to extract object phrases from its generated data, such as \\\"horse-drawn carriage\\\", \\\"brown horse\\\", and \\\"three passengers in the carriage\\\", as illustrated in Figure 1. These object phrases are fed into GroundingDINO to detect the corresponding bounding boxes. We then provide the detection results to GPT to check for object-related hallucinations in the generated data.\\n\\n> 3. Other than GroundingDINO, which image understanding tools are used, and how?\\n\\nWe mainly adopt GroundingDINO and GPT itself in checking the GPT-generated data. We use GPT to conduct multiple rounds of sentence-level verification using different prompts. These prompts include grounding results from GroundingDINO and potential hallucination types, as outlined in Table 1. Finally, we employ GPT to summarize the verification results and provide a report indicating whether each sentence in the generated data contains hallucinations, together with the corresponding analysis.\\n\\n> 4. (1) L314-319, GPT4V is being used to generate hallucination explanation pairs, but there is no indication that manual checking is used here despite the authors accepting that GPT4V suffers from \\\"severe hallucinations\\\" (L298), the logic here seems confused on the ability of GPT4V to create such specific data which only contains one error which is also useful for evaluation. (2) L320-L346, same arguments as above with the ability of GPT4V to do this accurately.\\n\\nWe would like to clarify that the severe hallucinations mentioned in L298 specifically refer to multimodal hallucinations produced by GPT when describing image content. However, GPT's text processing capability remains reliable. In L314-346, we mainly leveraged GPT's text-processing abilities to construct hallucination explanation pairs, as well as questions and options.
GPT only needs to modify text based on sentence-level analysis results and format it according to LongHalQA's requirements, which is rarely affected by multimodal hallucination issues.\\n\\n> 5. (1) If GPT-4V produces hallucination explanation pairs, is there manual verification, especially given its acknowledged hallucination issues (L298)? (2) In what cases is human verification used for hallucination checking, and how does it impact the dataset? (3) L344 \\\"except the hallucination checking that involves optional human verification\\\" does this mean human verification is used or not? What is the effect of using human verification in the data vs not?\\n\\nWe deploy human verification for the sentence-level hallucination analysis as summarized by GPT. In the hallucination check process, we leverage GPT to label each sentence in the generated data as either \\\"Match\\\" or \\\"Do not Match,\\\" along with corresponding hallucination explanations, as illustrated in Figure 7. Human evaluators then review the data and determine whether to accept the analysis. The verification process for most data is quite efficient, typically taking about one or two minutes. If the analysis is not accepted, the human evaluator needs to revise the label or the analysis. This step is crucial to ensure the correctness of our data. Though simpler object or attribute hallucinations can be mostly identified by previous multi-round checks and the help of the object detector, more complex hallucinations, such as those shown in Figure 2, require the human evaluator for verification. These human-verified complex hallucinations also enrich the diversity of hallucinations covered in LongHalQA and increase the overall difficulty of the benchmark.\\n\\n> 6. Can examples be given to the prompt templates in Appendix C to clarify instructions like \\\"Possible Content\\\" etc. 
(Figure 6)?\\n\\nIn Figure 6, we first provide a prompt template, followed by specific examples of prompts for different data formats (e.g., object descriptions, image descriptions, multi-turn conversations). The \\\"Possible Content\\\" denotes \\\"object types, colors, states, actions, number of objects, precise object locations, texts or OCR results, relationships or relative positions between objects, etc.\\\". We will update Figure 6-9, as well as their captions, to make the presentation clearer.\"}", "{\"title\": \"Looking forward to further discussion\", \"comment\": \"We sincerely appreciate your insightful and valuable comments. We have carefully addressed the main concerns in detail through experiments and explanations. We kindly ask for a few minutes of your time to check our responses. If our responses have clarified your concerns, we hope you might consider raising your evaluation of our paper. As the discussion phase is about to close, we look forward to hearing from you about any further feedback or suggestions for improving our work. We will be very happy to clarify any further concerns (if any).\"}", "{\"comment\": \"Thanks very much for your insightful comments. They are very helpful in improving our paper. In the following, we first state your comments and follow with our point-to-point response.\\n\\n> 1. How to evaluate model with the Hallucination Completion task? What is the prefix text for evaluation? Is it the first word?\", \"here_is_an_intact_example_of_our_query_for_the_hallucination_completion_task\": \"```\\nContinue the following description of the image.\\nThe image showcases a wooden desk in a room scattered with various electronic devices and objects. A laptop with an open page is centrally placed on the desk. On the left, there is a desk phone connected to a cord. To the right of the laptop, there is a second desk phone and a printer. Multiple cards, notepads,\\nA. 
a mobile phone charger, and a pen are also visible on the desk.\\nB. and a pen are also visible on the desk.\\nC. and markers are also visible on the desk.\\nD. a pair of earbud headphones, and a pen are also visible on the desk.\\nAnswer with the option's letter from the given choices directly.\\n```\\nWe will include more question examples from LongHalQA in the appendix for clarity.\\n\\n> 2. Why the Hallucination Completion can be seen as generative evaluation? The multi-choice question still is discriminative question.\\n\\nThank you for this insightful question. We believe that the Hallucination Completion task in LongHalQA provides a structured way to quantify MLLMs' generative quality, which complements existing generative benchmarks by addressing their limitations in terms of data and hallucination diversity, efficiency, and evaluation randomness. By providing multiple plausible options, we simulate the generative scenario of MLLMs within a framework that is easier to compare and score. As demonstrated in Table 8, the rankings of hallucination levels under the MCQ completion task are largely consistent with those from open-ended generative scenarios, demonstrating that the proposed MCQ task is able to capture the generative capabilities of MLLMs.\\n\\nIn addition, the MCQ hallucination completion task effectively simulates the sentence-level beam search generation process of MLLMs. Specifically, given the pre-generated text, MLLMs are required to generate multiple candidate options and then select the most reasonable, hallucination-free option as the completion result. Under such a scenario, if a model consistently selects non-hallucinatory candidates for each generated sentence, it can be considered to have a relatively low level of generative hallucinations.\\n\\n> 3. 
\\u201cthen analyze and filter them based on dataset annotations and GroundingDINO\\u201d: how did the authors analyze and filter?\\n\\nWe leverage GroundingDINO to remove inaccurately annotated boxes labeled as \\\"crowd\\\" and remove images that lack sufficient complexity or richness of content (such as images containing fewer than five objects). We will update the paper to make it clearer.\\n\\n> 4. Lack of comprehensive survey of hallucination on Large Vision-Language Models. [1] Object hallucination in image captioning [2] Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption [3] FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models [4] Analyzing and mitigating object hallucination in large vision-language models [5] FGAIF: Aligning Large Vision-Language Models with Fine-grained AI Feedback\\n\\nThank you for your suggestion. We will carefully review the suggested works in the Related Work section of the updated paper.\\n\\n> 5. The proposed LLM-free hallucination benchmark does not offer significant advantages, as the approach still requires various tools, LVLMs, and manual verification, leading to low efficiency.\\n\\nWe would like to clarify that no additional tools, LVLMs, or human verification are required during the evaluation process. These costs are only involved while constructing the LongHalQA benchmark itself. For evaluation, we directly test models using binary (yes/no) or multiple-choice questions.
As illustrated in Figure 3, our LongHalQA provides far more efficient evaluation than generative benchmarks that require waiting for MLLMs to generate content.\"}", "{\"comment\": \"We adopt the following process for the GPT-assisted generative evaluation of MLLM hallucination used in the above comment.\\n\\n```\\nFor each MLLM, we input the prompt: \\\"Describe the given image in detail.\\\" We then feed the image, the revised description from LongHalQA, and object annotations from Objects365 to GPT, and ask GPT to score the MLLMs' generated descriptions in terms of detail level and hallucination degree. Both scores range from 0 to 10, with higher scores indicating better performance (more detailed and less hallucinated content).\\n```\\n\\n> 2. Verification of the benchmark\\n\\nTo further validate the benchmark, we randomly selected 100 questions each from the discrimination and completion tasks across three data types in LongHalQA. We then had human evaluators answer these questions, and the results are shown in the table below. Human evaluators achieve an accuracy of around 90% on most tasks, highlighting that there is still significant room for addressing MLLM hallucinations.\\n\\nIn addition, Table 8 in the paper, along with the comparison with GPT-based generative evaluation in the above comment, could further demonstrate the effectiveness of our benchmark.\\n\\nTable 1. Human verification of the LongHalQA benchmark. We randomly sample 100 questions from each task and data format and ask human evaluators to answer the questions. ''Object'', ''Desc.'' and ''Conv.'' denote object description, detailed image description, and multi-round conversation, respectively. \\n| | Hall. | Discrimination | | | Hall.
| Completion |\\n|:------------------:|:-----------------:|:-----------------:|:--------------:|:--------------:|:---------:|:-----------:|\\n| **Object(Binary)** | **Desc.(Binary)** | **Conv.(Binary)** | **Desc.(MCQ)** | **Conv.(MCQ)** | **Desc.** | **Conv.** |\\n| 93% | 89% | 82% | 92% | 89% | 84% | 89% |\"}", "{\"title\": \"Looking forward to further discussion\", \"comment\": \"We sincerely appreciate your insightful and valuable comments. Regarding the robustness of evaluating hallucination-mitigation methods, we would like to clarify that our LongHalQA robustly demonstrates their improvements in mitigating hallucinations, as shown in the tables in our latest two responses. The non-robust results are from our comparative experiments under the free-form text completion task (affected by model preference and instruction-following capabilities), not from our LongHalQA evaluation itself. We have also supplemented comparisons of free-form generation and our MCQ approach on a more general image description task. Our MCQ and the free-generation setting demonstrate consistent improvements from these hallucination-mitigation methods.\\n\\nAs the discussion phase is nearing its end, we kindly ask for a few minutes of your time to check our responses. If our responses have clarified your concerns, we hope you might consider raising your evaluation of our paper. We look forward to hearing from you about any further feedback or suggestions for improving our work.\"}" ] }
50cmx4SrkM
Bayesian Analysis of Combinatorial Gaussian Process Bandits
[ "Jack Sandberg", "Niklas Åkerblom", "Morteza Haghir Chehreghani" ]
We consider the combinatorial volatile Gaussian process (GP) semi-bandit problem. Each round, an agent is provided a set of available base arms and must select a subset of them to maximize the long-term cumulative reward. We study the Bayesian setting and provide novel Bayesian cumulative regret bounds for three GP-based algorithms: GP-UCB, GP-BayesUCB and GP-TS. Our bounds extend previous results for GP-UCB and GP-TS to the \emph{infinite}, \emph{volatile} and \emph{combinatorial} setting, and to the best of our knowledge, we provide the first regret bound for GP-BayesUCB. Volatile arms encompass other widely considered bandit problems such as contextual bandits. Furthermore, we employ our framework to address the challenging real-world problem of online energy-efficient navigation, where we demonstrate its effectiveness compared to the alternatives.
[ "Multi-armed bandits", "Combinatorial bandits", "Contextual bandits", "Gaussian processes", "Energy-efficient navigation" ]
Accept (Poster)
https://openreview.net/pdf?id=50cmx4SrkM
https://openreview.net/forum?id=50cmx4SrkM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yAvuhJRQMA", "u885YE38XO", "onxGYKp3XL", "nkjnwwAI9g", "lT01lEybsx", "hqeVkiJNCR", "efsHI4ftAD", "YpKfnU6BwY", "VMjaF6futt", "VH01lBGBMT", "PDmNTby0Iu", "McKDQXV4CB", "M3be81hOwM", "4lz1UwPsCB" ], "note_type": [ "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730738642104, 1734968048944, 1731923039181, 1730877242565, 1731924042037, 1732801186435, 1731920574976, 1737523391816, 1731924470175, 1730211542546, 1732686959203, 1731922271778, 1730720346498, 1731922231630 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission354/Reviewer_2DSA" ], [ "ICLR.cc/2025/Conference/Submission354/Area_Chair_46ef" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Submission354/Reviewer_mCBV" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Submission354/Reviewer_8zLg" ], [ "ICLR.cc/2025/Conference/Submission354/Reviewer_WnA6" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ], [ "ICLR.cc/2025/Conference/Submission354/Reviewer_WnA6" ], [ "ICLR.cc/2025/Conference/Submission354/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigates the combinatorial volatile Gaussian process (GP) semi-bandit problem and provides the first Bayesian regret bounds for the GP-BayesUCB algorithm. In addition to this novel contribution, the authors extend their theoretical analysis to include Bayesian cumulative regret bounds for the GP-UCB and GP-TS algorithms, effectively addressing a notable research gap as highlighted in Table 1. 
To demonstrate the practical relevance of their framework, the authors apply their methods to a real-world problem: online energy-efficient navigation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tClear and Structured Presentation:\\nThe paper is well-written, with clear explanations and illustrations of the research gaps. The novelty of this work is effectively communicated, making it accessible even to readers who may not be deeply familiar with the field.\\n2.\\tSolid Theoretical Contributions:\\nThe authors provide rigorous theoretical analysis and establish new Bayesian regret bounds for multiple algorithms, including GP-BayesUCB, GP-UCB, and GP-TS. The paper addresses a significant gap in the literature by formalizing regret bounds for these settings. Full proofs are provided in the appendices, showcasing the depth of their analysis (though the correctness of these proofs was not verified).\\n3.\\tPractical Application:\\nThe real-world application of their framework to online energy-efficient navigation is both relevant and interesting. It demonstrates the practical utility of their theoretical advancements and highlights the potential for real-world impact.\", \"weaknesses\": \"1.\\tLack of Discussion on Theoretical Challenges:\\nWhile the paper provides new theoretical results, it does not clearly articulate the specific challenges encountered in deriving these results for GP-BayesUCB, GP-UCB, and GP-TS. A discussion on the theoretical hurdles and how they were addressed would provide valuable insight into the novelty and difficulty of these contributions.\\n2.\\tReproducibility Concerns:\\nNo code is provided for the experiments. 
This absence raises concerns about the reproducibility of the empirical results.\", \"questions\": \"1.\\tConnection Between Theory and Empirical Results:\\nThe online energy-efficient navigation application is a compelling demonstration of the framework\\u2019s practical utility. However, it would be helpful to clarify how the empirical results relate to the theoretical findings. Specifically, can the empirical results be used to verify or illustrate key observations from the theoretical analysis? If this connection is not direct, could you design controlled simulated experiments that more explicitly validate the theoretical regret bounds or insights?\\n2.\\tExtended Comparison in Table 1:\\nIncluding the regret rates alongside the regret bounds in Table 1 would greatly enhance its utility. This addition would allow readers to quickly compare the performance of different algorithms in terms of their theoretical guarantees. An extended table with this information would provide a clearer overview of the contributions and situate the work more firmly within the existing literature.\\n3.\\tDiscussion of Theoretical Challenges:\\nAs mentioned in the weaknesses, a dedicated section or paragraph discussing the theoretical challenges faced in deriving the regret bounds for GP-BayesUCB, GP-UCB, and GP-TS would add significant value. This discussion could cover aspects such as handling the volatility in combinatorial settings, managing the complexities introduced by semi-bandit feedback, or other technical hurdles specific to these algorithms.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This is a borderline paper with one reviewer being very positive and three reviewers somewhat critical. After having had a read through the reviews and discussions I believe that the criticisms are mostly about minor issues like the presentation of results. 
The paper presents useful results around Gaussian process semi-bandit problems and applies these techniques to a real-world problem. I believe that these contributions are valuable and worth being published.\", \"additional_comments_on_reviewer_discussion\": \"There was some interaction between the reviewers and the authors but this interaction didn't change the opinions of the reviewers.\"}", "{\"comment\": \"We thank **reviewer 2DSA** for their review and feedback. We are happy to hear that the reviewer appreciates our theoretical contributions and its practical application. Below, we address the points raised by the reviewer.\\n\\n**W1 Lack of Discussion on Theoretical Challenges**\\n\\nSee **Q3**.\\n\\n**W2 Reproducibility Concerns: No code is provided for the experiments. This absence raises concerns about the reproducibility of the empirical results.**\\n\\nThe code will be made publicly available for the camera-ready version. Our experiments relies only upon openly accessible data and we have provided references on how to access it. In addition, the data processing and algorithms used are detailed in the main text and supplementary material.\\n\\n**Q1 Connection Between Theory and Empirical Results: The online energy-efficient navigation application is a compelling demonstration of the framework\\u2019s practical utility. However, it would be helpful to clarify how the empirical results relate to the theoretical findings. Specifically, can the empirical results be used to verify or illustrate key observations from the theoretical analysis? If this connection is not direct, could you design controlled simulated experiments that more explicitly validate the theoretical regret bounds or insights?**\\n\\nYes, the empirical results can be used to verify or illustrate key observations from the theoretical analysis. 
First, as the theory suggests and the results in Figure 1 validate, the algorithms obtain sublinear regret w.r.t. $T$.\\n\\nSecond, our theoretical analysis shows that GP-BayesUCB provides more flexible parameters than GP-UCB. It is generally accepted that GP-UCB algorithms tend to overexplore due to overly large theoretical confidence intervals. In the right column of Figure 3, we show that the confidence parameter $\\\\beta_t$ of GP-BayesUCB is lower than that of GP-UCB, and additionally we can tune it to be even lower. In practice (left and middle columns of Figure 3), we also show experimentally that GP-BayesUCB obtains lower regret than GP-UCB.\\n\\nThird, as discussed by Russo \\\\& Roy (2014), the performance of UCB algorithms depends on designing tight confidence bounds, whereas the regret of Thompson sampling algorithms (using Russo \\\\& Roy's framework) can be bounded by any set of confidence bounds. For complex bandit settings, designing tight confidence bounds can be significantly harder. While our theoretical results suggest that GP-TS should perform similarly to GP-UCB and GP-BayesUCB, our experiments demonstrate that GP-TS obtains significantly lower regret, which can likely be attributed to the point raised by Russo \\\\& Roy. This finding is also consistent with other works on GP and non-GP bandits.\\n\\n**Q2 Extended Comparison in Table 1: Including the regret rates alongside the regret bounds in Table 1 would greatly enhance its utility. This addition would allow readers to quickly compare the performance of different algorithms in terms of their theoretical guarantees.
An extended table with this information would provide a clearer overview of the contributions and situate the work more firmly within the existing literature.**\\n\\nThank you for the suggestion; we have updated Table 1 to include the regret bounds of previous work.\\n\\n**Q3 Discussion of Theoretical Challenges: As mentioned in the weaknesses, a dedicated section or paragraph discussing the theoretical challenges faced in deriving the regret bounds for GP-BayesUCB, GP-UCB, and GP-TS would add significant value. This discussion could cover aspects such as handling the volatility in combinatorial settings, managing the complexities introduced by semi-bandit feedback, or other technical hurdles specific to these algorithms.**\\n\\nNote that we do discuss the theoretical challenges encountered in Sections 3.1 and 3.2, although we frame them in terms of our technical contributions. Prior to introducing Lemmas 3.2 and 3.5, respectively, we discuss the limits of the analysis from previous work and what we do to overcome them. For example, in Section 3.1, we describe how $\\\\mathbb{E}[U_t(\\\\mathbf{a}\\\\_{t})-f(\\\\mathbf{a}\\\\_{t})]$ must be bounded differently compared to the standard case of GP-UCB in Russo \\\\& Roy (2014) due to the inverse error function $\\\\text{erf}^{-1}(u)$ used by GP-BayesUCB, and in Section 3.2, we point out that the volatile setting invalidates a key step in the proof of Takeno et al. (2023) and that we overcome this by analyzing the discretization error of $U_t([\\\\mathbf{a}]_{\\\\mathcal{D}_t}) - U_t(\\\\mathbf{a})$.
In comparison to previous works, this is the first regret bound for GP-Bayes UCB, and in addition, they extend the existing regret bounds for GP-UCB and GP-TS to the infinite, volatile and combinatorial setting (which also includes the popular contextual bandit setting).\\n\\nThe authors apply these algorithms to the problem of online energy-efficient navigation to demonstrate the performance of the various algorithms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The main strength of the paper is the theory. I do believe the GP semi-bandit problems considered in this paper are important, and having regret bounds for the algorithms discussed in this paper is also useful.\\n\\nSpecifically, it is nice to see sub-linear regret bounds for all three algorithms.\\n\\nFurthermore, I also believe that the general techniques developed here may be useful to derive regret bounds for other bandit settings.\", \"weaknesses\": \"1. I think the paper lacks some clarity, and the exposition can improve significantly. For example, it requires recalling previous literature to properly understand the set-up in Section 2.1: Is A a finite set? 2^A is the set of all subsets of A? What happens when A is infinite as in Section 3.2?\\n2. Though the dependency on T is sub-linear, I am not sure how to view the dependency on K. Especially in the infinite case. Are there any lower bounds for these settings? It is hard to view how good or bad the bounds are with lack of comparisons.\\n3. Building on top of 2 above, I am curious to know if this is the best dependency on T you can get. I am used to seeing \\\\sqrt{T} regret bounds for bandit algorithms -- is this not achievable in such settings?\\n4. I thought that the experimental section was too artificial.
If the motivation is to solve the problem in the best possible way, there are probably better ways of solving the problem (for example using RL) than naively applying the semi-bandit learning algorithms. If the point is to show the performance of various algorithms, a simple example would suffice. In my opinion, the addition of these experiments does not add any additional value to the paper, and does not change the fact that the paper's main (only) contributions are the theoretical bounds.\", \"questions\": \"Please respond to my above concerns.\\n\\nIn addition, I would request the authors to add theorems / propositions after Theorems 3.2 and 3.6, without any \\\\gamma_t and \\\\beta_t terms. Or more generally, with as few variables as possible.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank **reviewer WnA6** for their review and examination of our work. The reviewer is not convinced that we properly consider the volatile case; however, we assert that **we do consider the volatile case** and we hope the reviewer will engage in further discussions so that any confusion can be clarified.\\n\\n**W1: Though the work claims to present the bounds for volatile case but the proof for the bounds do not seem to consider it. As an example what would happen when the best arm is not present among the observed arms?**\\n\\nThe volatile case is considered for all the proofs in the paper. Note that the best arm, $\\\\mathbf{a}_t^*$, is defined as the best arm among the available arms (see line 140) and we use the subscript $t$ to indicate that it varies over time. 
For clarification, could you point to a specific equation, lemma or theorem where the volatile case is not considered?\\n\\n**W2: Not significant contribution, the paper mainly builds on the works of Russo & Roy 2014, Srinivas et al 2012 and Takeno et al 2023, where in to compute the Bayesian regret one only needs to compute the expectation over the high probability regret bounds given by the above works.**\\n\\nThe Bayesian regret cannot simply be computed by taking the expectation over the high-probability bounds given by Srinivas et al. (2012), Russo \\\\& Roy (2014), and Takeno et al. (2023). In fact, neither Russo \\\\& Roy nor Takeno et al. provide high-probability bounds; they both provide Bayesian regret bounds. As we state in the introduction, our contribution is to extend existing Bayesian regret bounds for GP-bandits to new and practically important settings: the infinite, volatile, and combinatorial settings. Additionally, we introduce the first regret bound for GP-BayesUCB (which was introduced by Nuara et al., 2018). Note that previous results only held for finite volatile (Russo \\\\& Roy, 2014) or infinite non-volatile settings (Srinivas et al., 2012, Takeno et al., 2023, 2024, Kandasamy et al., 2018). The closest similar work to ours (Nika et al., 2022) provides frequentist bounds for GP-UCB in the same setting, but we establish Bayesian regret bounds in a unified manner for GP-UCB, GP-TS and GP-BUCB.\\n\\nFor the finite case, we adapt the results of Chang et al. (2011) to bound the inverse error function $\\\\text{erf}^{-1}(u)$ (Lemma 3.1) and obtain regret bounds for GP-BUCB with more flexible parameters compared to GP-UCB. We provide a demonstration in our experiments of how the choice of parameters can lower the regret but maintain theoretical guarantees. 
Our technical contribution for the infinite setting is to provide an analysis of the discretization error $U_t([\\\\mathbf{a}]_{\\\\mathcal{D}_t}) - U_t(\\\\mathbf{a})$ (Lemma 3.5), which is the key step to allow volatile arms.\\n\\n**W3: Lemma 3.1 the results are considered for different regimes of horizons for different cases of the ratio, why not choose the limits as 1 to T for the 3rd case, wouldn't that be a tighter bound?** \\n\\nTaking the limits $[1,T]$ instead of $[1,\\\\infty]$ does not yield a meaningfully tighter bound. On page 16, line 846, we would obtain an additional term ($\\\\propto t^{1-\\\\xi/\\\\omega}$) that would go to zero as $T \\\\rightarrow \\\\infty$. Note also that in the equivalent lemma for GP-UCB and GP-TS (Lemma A.2) we also take the limit $T \\\\rightarrow \\\\infty$ to obtain a constant bound, see lines 783-786. By doing the same in Lemma 3.1, we are consistent with the other algorithms and previous work.\"}", "{\"title\": \"Reply to reviewer WnA6\", \"comment\": \"**W1**\\n> \\\"All the results related to the bounds stated in the paper would still hold if the case was non-volatile.\\\"\\n\\nYes, this is because the case $\\\\mathcal{A}_t=\\\\mathcal{A}$ is a special case of the volatile setting $\\\\mathcal{A}_t \\\\subseteq \\\\mathcal{A}$. In the infinite case, we require the number of discretization points $\\\\tau_t$ to satisfy additional inequalities (Eq (6a-c) in Assumption 3.5) that are not necessary for the non-volatile case. The confidence parameter $\\\\beta_t$ ($\\\\propto \\\\log \\\\tau_t$) is therefore larger in the volatile case than would be necessary in the non-volatile case. Our bounds do consider the volatile case as evident from them containing $\\\\beta_T$.\\n\\n> \\\"It would be better to elaborate on what kind of volatility is being considered by assigning a certain distribution on the set of arms being observed or not observed at any time t. 
I think such a consideration would have a significant impact on the results and would be tuned more towards the volatile arms.\\\"\\n\\nOur results hold for any adversarial or random selection of the available base arms $\\\\mathcal{A}_t$ and available super arms $\\\\mathcal{S}_t$. Assigning a specific distribution would detract from the generality of our results.\\n\\n> \\\"Additionally as the question is stated - let's say an optimal arm is not observed during the initial set of rounds, would that not account for a constant regret of the true optimal - observed optimal? which is not reflected in the bounds.\\\"\\n\\n**This does not need to be reflected in the bounds.** To clarify the terminology, the set of *feasible* and *available* super arms at time $t$ is $\\\\mathcal{S}\\\\_t \\\\subset 2^{\\\\mathcal{A}\\\\_{t}}$ where $\\\\mathcal{A}\\\\_t$ is the set of *available* base arms at time $t$. Again, we reiterate that the optimal super arm at time $t$ is defined as $\\\\mathbf{a}^*\\\\_t=\\\\arg\\\\max_{\\\\mathbf{a} \\\\in \\\\mathcal{S}\\\\_{t}} \\\\sum_{a \\\\in \\\\mathbf{a}} f(a)$ since the agent cannot select a better arm than $\\\\mathbf{a}_t^*$ at time $t$. **Note that** $\\\\mathbf{a}^*\\\\_{t}$ **varies over time in the volatile arm setting as indexed by $t$**. Hence, the agent can always obtain an instant regret of $0$. Defining the optimal arm as the best available arm is standard in volatile bandits, see Qin et al. (2014), Russo \\\\& Roy (2014), Chen et al. (2018) and Nika et al. (2020; 2022). As a comparison, contextual bandit problems always compare against the best arm *given the current context* - which is equivalent to the best available arm for volatile bandits.\\n\\n- We would like to revisit the reviewer's original claim in weakness 1: \\\"... the proof for the bounds do not seem to consider [the volatile case]\\\". 
We still assert that **we do consider the volatile case in both the proofs and bounds.** After considering the points we raise, **does the reviewer still stand by this claim?**\\n\\n**W2**\\n> \\\"The paper does not show a significant contribution because the over all reward considered at any time t is sum of the rewards of each individual arm and ... The only novel contribution is due to Lemma A9 and A.12 which brings in the factor of largest eigenvalue of all the possible covariance matrices.\\\"\\n\\nLinear rewards are not uncommon in the literature, see Kveton et al. (2015), Wen et al. (2015), Russo & Roy (2016), and \\u00c5kerblom et al. (2023). As for contributions beyond the combinatorial setting, we establish the first regret bound for GP-BayesUCB. In addition, for GP-TS and GP-UCB, we fill the unaddressed gap of infinite and volatile GP-bandits. \\n\\nKveton et al. \\\"Tight Regret Bounds for Stochastic Combinatorial Semi-Bandits.\\\" Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 2015.\\n\\nCombes et al. \\\"Combinatorial Bandits Revisited.\\\" Advances in Neural Information Processing Systems 28, 2015.\\n\\nWen et al. \\\"Efficient Learning in Large-Scale Combinatorial Semi-Bandits.\\\" Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1113-1122, 2015.\\n\\nRusso & Roy. \\\"An Information-Theoretic Analysis of Thompson Sampling.\\\" Journal of Machine Learning Research 17, no. 68 (2016).\\n\\n> \\\"... which would be similar to considering a single arm bandit frame work with the set of observed arms varying after every K iterations and all the results by previous work would just follow.\\\"\\n\\n**This hypothetical solution would not work**, as one would need to account for the delayed feedback. 
Even if one did account for the delayed feedback, there are no previous Bayesian regret bounds considering the infinite and volatile setting.\\n\\n**W3**\\n\\n**No, this is incorrect.** The same argument cannot be applied to case one where $\\\\xi/\\\\omega < 1$ as $\\\\lim_{T \\\\rightarrow \\\\infty} T^{1 - \\\\xi / \\\\omega} = \\\\infty$. Letting $T \\\\rightarrow \\\\infty$ for bounded right-hand sides is standard procedure within the literature. We encourage the reviewer to compare against the proof of Theorem 2 in Srinivas et al. (2012), Lemma 2 in Russo \\\\& Roy (2014), Theorem 11 in Kandasamy et al. (2018), and Theorem B.1 (Eq. (5)) in Takeno et al. (2023).\"}", "{\"title\": \"Summary of changes made based on reviewer feedback\", \"comment\": [\"Text marked with ${\\\\color{blue}\\\\text{blue}}$ has been added and text marked with ${\\\\color{red}\\\\text{red}}$ will be removed.\", \"Added comparison of regret to Table 1, as suggested by reviewer 2DSA.\", \"Added clarifications to section 2.1 regarding the base arms $\\\\mathcal{A}$ and the available base arms $\\\\mathcal{A}_t$ based on feedback from reviewer mCBV.\", \"Added discussion of lower bounds in terms of $K$ and $T$ to section 3.1 based on feedback from reviewer mCBV.\", \"To make space for the above changes, Figure 4 was removed from the main paper and its results are instead presented in Figure 5 in the appendix. Note that the results in the now-removed Figure 4 were a subset of Figure 5 (previously Figure 6).\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank **reviewer 8zLg** for their review and feedback. We are pleased to hear that the reviewer finds value in our experimental setup and greatly appreciates our writing. Below, we address the points raised by the reviewer.\\n\\n**W1: No synthetic data experiment. Not even in the supplementary material. 
In my opinion, synthetic data experiments can significantly add to the development of intuitions about the framework. Also since you have much more control over the creation of the data, it can reveal interesting properties of the framework [in comparison with state-of-the-art].**\\n\\nAll our experiments are performed in a controlled setting by sampling the expected reward from a Gaussian process and the noise from i.i.d. zero-mean Gaussians. The mean, kernel and noise parameters are based on realistic values given by the deterministic energy model in Eq. (8). The graph and its corresponding edge features are also based on the real-world road networks of Luxembourg and Monaco. Note that we could have set the parameters and the graph arbitrarily. By using realistic values, the obtained regrets can be interpreted in terms of the energy saved and we do not lose much, if any, control over the data creation. Additionally, this still corresponds to a synthetic data generation process.\\n\\nIn the experiments, we study interesting properties of the algorithms, such as the impact of the BayesUCB and UCB parameter values. We find that GP-BayesUCB has parameters that can be tuned to lower the regret whilst maintaining theoretical guarantees, unlike GP-UCB. Additionally, we vary the lengthscale parameter in the GP prior and demonstrate the difference between our method and previous work is most pronounced with longer lengthscales since it simplifies the problem. See our answer to reviewer 2DSA for further discussions on the connection between theory and empirical results.\\n\\n**W2: Also, I could not find an experiment with the horizon more than 500 rounds. I am curious about the performance of the frameworks as the horizon goes well beyond T=500. 
I believe that proper comparison of bandit frameworks [most of the times] comes with running the experiments for long horizons.**\\n\\nFor any bandit experiments on a limited budget, there is a tradeoff between the number of experiments, samples and horizon length. Note that even with $T=500$, the best-performing method (GP-TS) usually reaches saturation. The main purpose of introducing GP-methods to the energy-efficient navigation problem was to improve the sample efficiency compared to \\u00c5kerblom et al. (2023) by taking advantage of correlations. Therefore, we chose to provide more experiments rather than longer experiments to demonstrate that the superior sample efficiency holds across different routes, networks, UCB parameter values and lengthscales.\\n\\n**W3: I did not notice any discussion in the paper about possible extensions and future directions and further impacts of their research.**\\n\\nSee **Q2**.\\n\\n\\n**Q1: Does the type of directed graph affect the applicability of the framework? For instance, how does the graph [being cyclic or acyclic] affect the performance of the framework?**\\n\\nIn general, the type of directed graph should not affect the applicability of the overall framework and its theoretical analysis. However, given some domain knowledge on the type of graph, one may tailor the steps in Algorithm 1 to improve the performance. For example, given an acyclic graph one does not need to guarantee that the weights are positive and one could therefore replace Dijkstra's algorithm with the Bellman-Ford algorithm for small enough graphs.\\n\\n**Q2: I did not notice any discussion in the paper about possible extensions and future directions and further impacts of their research, not even in the supplementary section. Why? Can you please clarify?**\\n\\nAn important future direction of this research is to study the multi-agent setting where the vehicle can learn from observations given by a fleet of vehicles. 
The methods we introduce should be able to learn from fewer vehicles since, by taking correlations into account, we do not need to explore the entire network. Our motivation for considering the online energy-efficient navigation problem is to extend the effective range of electric vehicles by identifying efficient routes quickly. We hope that our work can lead to reduced range anxiety among consumers and speed up the transition to a sustainable transportation system.\\n\\nWe will try to add an additional section to the supplementary material before the discussion phase ends.\"}", "{\"summary\": \"The paper studies Gaussian process bandits in the contextual volatile semi-bandit setting. The contribution of the paper is mainly theoretical as it provides novel Bayesian regret bounds for previously designed algorithms. In addition, there is an interesting application of their framework in online energy-efficient navigation. This experimental application builds on top of the previously designed experiment of the same application in bandit papers.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper is building on top of other previously published frameworks; however, it is not a straightforward extension of the previous works.\\n\\nThe experiments (application of their framework) in the online energy-efficient navigation problem seem to have added some novelties and value to the paper. \\n\\nThe paper is written in an excellent way. The explanations for the most important parts of the algorithms are clear. Also, the similarities and differences (novelties) of their framework in comparison to the state-of-the-art are clarified properly.\", \"weaknesses\": \"No synthetic data experiment. Not even in the supplementary material. In my opinion, synthetic data experiments can significantly add to the development of intuitions about the framework. 
Also since you have much more control over the creation of the data, it can reveal interesting properties of the framework [in comparison with state-of-the-art].\\n\\nAlso, I could not find an experiment with the horizon more than 500 rounds. I am curious about the performance of the frameworks as the horizon goes well beyond T=500. I believe that proper comparison of bandit frameworks [most of the times] comes with running the experiments for long horizons. \\n\\nI did not notice any discussion in the paper about possible extensions and future directions and further impacts of their research.\", \"questions\": \"Does the type of directed graph affect the applicability of the framework? For instance, how does the graph [being cyclic or acyclic] affect the performance of the framework?\\n\\nI did not notice any discussion in the paper about possible extensions and future directions and further impacts of their research, not even in the supplementary section. Why? Can you please clarify?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Feedback\", \"comment\": \"1. All the results related to the bounds stated in the paper would still hold if the case was non-volatile. It would be better to elaborate on what kind of volatility is being considered by assigning a certain distribution on the set of arms being observed or not observed at any time t. I think such a consideration would have a significant impact on the results and would be tuned more towards the volatile arms. Additionally, as the question is stated: let's say an optimal arm is not observed during the initial set of rounds; would that not account for a constant regret of the true optimal - observed optimal, which is not reflected in the bounds?\\n\\n2. 
The paper does not show a significant contribution because the overall reward considered at any time t is the sum of the rewards of each individual arm, which would be similar to considering a single-arm bandit framework with the set of observed arms varying after every K iterations, and all the results by previous work would just follow. The only novel contribution is due to Lemmas A.9 and A.12, which bring in the factor of the largest eigenvalue of all the possible covariance matrices.\\n\\n3. The same argument can be applied to case one, where letting T tend to $\\\\infty$ would lead to a constant value? Additionally, the LHS of the bound does not say that it is considering the case of an asymptotic bound.\\n\\nThank you for the response to the questions. I will keep my score.\"}", "{\"comment\": \"**W4: I thought that the experimental section was too artificial. If the motivation is to solve the problem in best possible way, there are probably better ways of solving the problem (for example using RL), than naively applying the semi-bandit learning algorithms. If the point is to show the performance of various algorithms, a simple example would suffice. In my opinion, the addition of these experiments does not add any additional value to the paper, and does not change the fact that the papers main (only) contributions are the theoretical bounds.**\\n\\nWhilst the combinatorial semi-bandit problem can be seen as simply a special case of a reinforcement learning (RL) problem, RL formulations introduce unnecessary complexities (e.g. learning state transitions, transition-dependent rewards) that worsen the sample efficiency significantly. As an example, the actions selected by the agent will affect the traffic environment but those changes are unlikely to affect the agent itself since any edge is traversed at most once during an episode. 
For the problem of route and charging station selection, semi-bandit algorithms (\\u00c5kerblom et al., 2023a) have been demonstrated to scale up to graphs with orders of magnitude more nodes and edges compared to deep RL algorithms (Lee et al., 2020; Qian et al., 2019).\\n\\n\\u00c5kerblom et al. (2023b) introduce a semi-bandit framework that combines informed priors with efficient exploration at a low computational cost. Compared to \\u00c5kerblom et al. (2023), our formulation further improves the exploration and sample efficiency by utilizing the correlations (given by the GP) between edge weights. Doing the same for an RL problem would not be as straightforward and would lack the proven strong performance guarantees given by bandit methods.\\n\\nLee et al. \\\"Deep reinforcement learning based optimal route and charging station selection.\\\" Energies 13.23 (2020): 6255.\\n\\nQian et al. \\\"Deep reinforcement learning for EV charging navigation by coordinating smart grid and intelligent transportation system.\\\" IEEE Transactions on Smart Grid 11.2 (2019): 1714-1723.\\n\\n\\u00c5kerblom et al. \\\"A Combinatorial Semi-Bandit Approach to Charging Station Selection for Electric Vehicles.\\\" Transactions on Machine Learning Research (2023a).\\n\\n\\u00c5kerblom et al. \\\"Online learning of energy consumption for navigation of electric vehicles.\\\" Artificial Intelligence, 317:103879 (2023b).\\n\\n**Q1: In addition, I would request the authors to add theorems / propositions after Theorems 3.2 and 3.6, without any $\\\\gamma_t$ and $\\\\beta_t$ terms. Or more generally, with as few variables as possible.**\\n\\nFor the squared-exponential kernel and ignoring logarithmic terms, we get $\\\\text{BR}(T) = \\\\mathcal{O}(K \\\\sqrt{T})$. Note that $\\\\gamma_T$ and $\\\\beta_T$ are standard terms that appear in most regret bounds in the GP-bandit literature, see Srinivas et al. (2012). 
Additionally, in the paragraphs below Thm 3.2 and 3.6, we highlight the order of complexity that we obtain and compare the expression we get against previous works.\\n\\nAt the suggestion of reviewer 2DSA, we have updated Table 1 to include our regret bounds and those from previous works. The included regret bounds are simplified compared to those in Thms 3.2 and 3.6.\"}", "{\"summary\": \"The paper claims to present novel Bayesian regret bounds for GP-UCB and GP-TS in the combinatorial, volatile and infinite-arm setting. Further, they present experimental results for a real-world application of online energy-efficient navigation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper provides the bounds for the Bayesian regret for GP-BUCB, GP-UCB and GP-TS.\", \"weaknesses\": \"1. Though the work claims to present the bounds for the volatile case, the proofs for the bounds do not seem to consider it. As an example, what would happen when the best arm is not present among the observed arms?\\n2. Not a significant contribution: the paper mainly builds on the works of Russo & Roy 2014, Srinivas et al 2012 and Takeno et al 2023, wherein, to compute the Bayesian regret, one only needs to compute the expectation over the high probability regret bounds given by the above works.\\n3. Lemma 3.1: the results are considered for different regimes of horizons for different cases of the ratio; why not choose the limits as 1 to T for the 3rd case, wouldn't that be a tighter bound?\", \"questions\": \"See the weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank **reviewer mCBV** for their review and feedback. We are pleased to hear that the reviewer finds our theoretical regret bounds useful. 
Below, we address the points raised by the reviewer.\\n\\n**W1: I think the paper lacks some clarity, and the exposition can improve significantly. For example, it requires recalling previous literature to properly understand the set-up in Section 2.1: Is A a finite set? 2^A is the set of all subsets of A? What happens when A is infinite as in Section 3.2?**\\n\\n$\\\\mathcal{A}$ can be either a finite or infinite set; in section 3.1 we consider $|\\\\mathcal{A}| < \\\\infty$ and in section 3.2 we consider $|\\\\mathcal{A}| = \\\\infty$. (We can update the text for sec. 3.2 to state $|\\\\mathcal{A}| = \\\\infty$. Currently we only say it in words.)\\n\\nYes, we use $2^{\\\\mathcal{A}}$ to denote the power set. This is fairly standard notation but a clarifying sentence will be added.\\n\\nThe combinatorial structure in our problem complicates the situation when $\\\\mathcal{A}$ is infinite and it is worth clarifying further. The agent must select a combination of base arms but may not pick the same arm twice. To prevent issues with limit points, we therefore restrict the available base arms at round $t$, $\\\\mathcal{A}_t$, to be finite. Even though $\\\\mathcal{A}_t$ is finite, whether $\\\\mathcal{A}$ is finite or infinite has large implications for contextual bandits. In the navigation problem, time-varying features (such as congestion, time of day or outside temperature) can only be modelled as continuous features if $|\\\\mathcal{A}| = \\\\infty$ but would be limited to discretized features if $|\\\\mathcal{A}| < \\\\infty$.\\n\\n**W2: Though the dependency on T is sub-linear, I am not sure how to view the dependency on K. Especially in the infinite case. Are there any lower bounds for these settings? It is hard to view how good or bad the bounds are with lack of comparisons.**\\n\\nWe have added additional comparisons regarding the dependency on $K$ to section 3.1.\\n\\nRegarding comparisons w.r.t. $K$, we do compare against the closest work by Nika et al. 
(2022) and obtain the same dependency. For a linear kernel, one could also compare against Wen et al. (2015), where a linear dependency on $K$ is obtained for CombLinTS and CombLinUCB, although their setting may differ slightly from ours. Similar to us, their regret bound depends only on $K$ and not on the number of base arms. Therefore, we do not believe the infinite case should change anything about the dependency on $K$. For combinatorial semi-bandits with linear reward functions (but independent arms), Merlis \\\\& Mannor (2020) obtain a $\\\\Omega(\\\\sqrt{K} \\\\log K)$ lower bound which would suggest a gap of $\\\\sqrt{K} \\\\log K$ for the linear kernel.\\n\\nZheng Wen, Branislav Kveton, Azin Ashkan. Efficient Learning in Large-Scale Combinatorial Semi-Bandits. Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1113-1122, 2015.\\n\\nNadav Merlis and Shie Mannor. Tight Lower Bounds for Combinatorial Multi-Armed Bandits. In Proceedings of Thirty Third Conference on Learning Theory, pp. 2830\\u20132857. PMLR, 2020.\\n\\n**W3: Building on top of 2 above, I am curious to know if this is the best dependency on T you can get. I am used to seeing $\\\\sqrt{T}$ regret bounds for bandit algorithms -- is this not achievable in such settings?**\\n\\nWe have added additional discussion about lower bounds w.r.t. $T$ to section 3.1.\\n\\nTo our knowledge, there are no known lower bounds for Bayesian regret of GP-bandit algorithms in general. However, Scarlett et al. (2017) and Cai et al. (2021) discuss lower bounds for non-Bayesian regret. For the SE-kernel, they obtain $R(T) = \\\\Omega(\\\\sqrt{T (\\\\log T)^{d/2}})$, which would indicate that our bounds are tight up to logarithmic factors of $T$.\\n\\nNote that we do compare our dependency on $T$ against the non-combinatorial and non-volatile results obtained by Takeno et al. (2023; 2024) in sections 3.1 and 3.2. For the finite case, Takeno et al. 
obtains a $\\\\mathcal{O}(\\\\sqrt{T \\\\gamma_T})$ bound, which is $\\\\mathcal{O}(\\\\sqrt{\\\\log T})$ tighter than our bound. But for the infinite case our bounds match Takeno et al. For the infinite case, constructing a finite discretization is a standard technique used to prove the regret bounds in the literature but is believed to be the cause of the extra $\\\\mathcal{O}(\\\\sqrt{\\\\log T})$ factor, see discussion in Takeno et al. (2023). We have also updated Table 1 to contain the regret bounds of previous work.\\n\\nXu Cai, Jonathan Scarlett. On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization. Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1216-1226, 2021.\\n\\nJonathan Scarlett. Tight Regret Bounds for Bayesian Optimization in One Dimension. Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4500-4508, 2018.\"}" ] }
50UzaXh0gC
One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability
[ "Gabriel Kasmi", "Amandine Brunetto", "Thomas FEL", "Jayneel Parekh" ]
Despite the growing use of deep neural networks in safety-critical decision-making, their inherent black-box nature hinders transparency and interpretability. Explainable AI (XAI) methods have thus emerged to understand a model's internal workings, and notably attribution methods also called Saliency maps. Conventional attribution methods typically identify the locations - the where - of significant regions within an input. However, because they overlook the inherent structure of the input data, these methods often fail to interpret what these regions represent in terms of structural components (e.g., textures in images or transients in sounds). Furthermore, existing methods are usually tailored to a single data modality, limiting their generalizability. In this paper, we propose leveraging the wavelet domain as a robust mathematical foundation for attribution. Our approach, the Wavelet Attribution Method (WAM) extends the existing gradient-based feature attributions into the wavelet domain, providing a unified framework for explaining classifiers across images, audio, and 3D shapes. Empirical evaluations demonstrate that WAM matches or surpasses state-of-the-art methods across faithfulness metrics and models in image, audio, and 3D explainability. Finally, we show how our method explains not only the where - the important parts of the input - but also the what - the relevant patterns in terms of structural components.
[ "interpretability", "feature attribution", "wavelet", "images", "audio", "3D shapes" ]
Reject
https://openreview.net/pdf?id=50UzaXh0gC
https://openreview.net/forum?id=50UzaXh0gC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hnRfht47zP", "faGbj1Z93D", "ZdLOEOun2y", "S3faaMFdet", "IoteRazcmv", "19io3kQz9U" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1729483662644, 1737523783320, 1730005653537, 1730663274118, 1734313937851, 1730322180999 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6653/Reviewer_AcnY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6653/Reviewer_kXsu" ], [ "ICLR.cc/2025/Conference/Submission6653/Reviewer_F8CS" ], [ "ICLR.cc/2025/Conference/Submission6653/Area_Chair_J95M" ], [ "ICLR.cc/2025/Conference/Submission6653/Reviewer_AFY7" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors identified a gap in existing attribution methods, specifically their inability to explain the structural components of input data. The authors propose a novel wavelet-based attribution method extending to multiple modalities including images, audio, and 3D shapes. While the topic is timely and the problem addressed is of significant importance, the proposed method lacks a clearly demonstrated advantage in explaining structural components compared to existing techniques. Moreover, the quantitative results do not convincingly showcase the method's superiority. Therefore, I tend to reject this paper in its current version.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The topic addressed in this paper is important, and the proposed method is novel. Current attribution methods struggle to explain and distinguish structural components in the input, and the integration of wavelets into attribution calculations shows promise in addressing this limitation.\\n2. Extensive evaluations are conducted across multiple modalities, including images, audio, and 3D shapes.\\n3. 
This paper is easy to follow, with detailed descriptions of evaluation metrics and experiments.\", \"weaknesses\": \"1. The primary weakness is the lack of significant differentiation between the proposed attribution method and existing approaches. The paper provides limited analysis or visualizations that convincingly show how WAM offers better hierarchical explanations. While some comparisons are presented (e.g., in Figures 2 and 12), I don\\u2019t see clear and enough advantages in explaining input structures. The authors should further explore or emphasize the distinctive aspects of their method.\\n2. The quantitative results do not consistently demonstrate improvements over existing attribution methods. While comparisons with older methods are acceptable given the novelty of the proposed approach, the proposed method falls significantly behind in several key metrics, such as in Tables 3 and 5 (Appendix). Additionally, the results in Table 1 are concerning; given the definition of the Faithfulness metric in Eq. 9, the output should always be positive. Why, then, are many results in Table 3 reported as zero?\\n3. The organization of the results section could be improved. For example, the discussion of perturbation-based attribution methods in Section 4.2 appears abruptly and feels disconnected from Section 4.1.\", \"questions\": \"1. Could you clarify why many results in Table 3 are zero when using the Faithfulness metric?\\n2. Could you explain why Integrated Gradients perform best on the \\\\mu-Fidelity metric but perform worst in Faithfulness? 
Is this discrepancy due to different experimental setups?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces the \\u201cWavelet Attribution Method (WAM),\\u201d a feature attribution technique in explainable AI in which attribution is performed in the wavelet domain. WAM leverages the wavelet domain to extend gradient-based feature attributions, preserving the multi-scale structures of the input data. This approach provides a unified framework applicable across various modalities such as images, audio, and 3D shapes. Empirical evaluations demonstrate that WAM matches or surpasses other methods in faithfulness metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-structured, presenting its concepts clearly and in an accessible manner. The theoretical foundations for using gradients in the wavelet domain are well-developed, filling a gap in the current literature where such an approach has not been extensively explored.\\n\\n The paper introduces a novel method by leveraging gradients in the wavelet domain, which provides a new perspective on feature attribution.\\n\\nThe paper's evaluation of the proposed method's faithfulness using multiple faithfulness metrics is thorough and valuable. By comparing the proposed approach across different evaluation criteria, the authors demonstrate the robustness and reliability of their method.\\n\\n The figures and visualizations are well-designed, enhancing the clarity of the paper. They effectively illustrate the principles of the Wavelet Attribution Method (WAM) and provide a clear understanding of how wavelet-based attributions differ from traditional pixel-based methods. 
The use of visual examples makes the theoretical concepts more accessible and supports the argument for the method\\u2019s efficacy.\", \"weaknesses\": \"The approach used in this paper shares similarities with the WaveletX(Kolek et al, 2022) method, which also performs saliency mapping in the wavelet domain. The primary distinction lies in the use of gradients as a mask in this work, While WaveletX optimize the mask on wavelet domain. However, this difference may not be significant enough to constitute a radical contribution to the field. It would be better to explicitly compare both methods and highlight the novel aspects of WAM.\\n\\nThe paper does not include quantitative assessments for 3D shape analysis and relies solely on qualitative results. Incorporating quantitative metrics would strengthen the evaluation and provide a more comprehensive understanding of the method's performance in this domain.\\n\\nAlthough the authors claim that the Wavelet Attribution Method (WAM) outperforms other approaches across different domains, the results in Table 2 suggest otherwise. Specifically, WAM does not consistently outperform Integrated Gradients, indicating that the performance advantage may not be as significant as claimed.\\n\\nThe experimental comparisons primarily involve methods like Integrated Gradients, GradCAM++, and SmoothGrad, which are not the most recent or best-performing approaches according to the fidelity metric. Including comparisons with more recent and state-of-the-art methods, such as LRP-\\u03b1\\u03b2 (Samek et al., 2016), LayerCAM (Jiang et al., 2021), Guided Backpropagation (Selvaraju et al., 2016), AttnLRP (Achibat et al., 2024), and SRD (Han et al., 2024), would strengthen the evaluation and better demonstrate WAM's superiority.\\n\\nThe paper does not adequately demonstrate how WAM enables an understanding of what features the model uses to make decisions. 
While the method highlights important wavelets, it does not clarify the specific meaning or relevance of these wavelets to the classification task. For instance, approaches like CRAFT (Concept Recursive Activation FacTorization for Explainability, Fel et al., 2023) offer more explicit explanations by identifying meaningful concepts (e.g., \\\"elephant tusk\\\") that recur across multiple samples. Providing a similar level of interpretability by linking important wavelets to specific semantic features would improve the explanatory power of WAM.\", \"questions\": \"In Figure 2 (d), I don\\u2019t know why decomposing important coefficients at each scale is necessary. Is there a specific reason that scaling is important?\\n\\nCan WAM discriminate each object when multiple classes are in one image, such as a cat-dog image?\\n\\nWhat is GRADWCAM in figure 5?\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"Potential plagiarism: this paper's approach is almost identical to the paper \\\"Assessment of the Reliablity of a Model's Decision by Generalizing Attribution to the Wavelet Domain\\\", which was presented at the \\\"XAI in Action: Past, Present, and Future Applications\\\" workshop at NeurIPS 2023. I understand that workshop papers are not considered formal archived publications. However, I have some concerns about potential plagiarism, as it is unclear whether the authors of the current submission are the same as those of the workshop paper.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the Wavelet Attribution Method (WAM), which improves gradient-based feature attribution by utilizing the wavelet domain as a comprehensive framework for explaining classifiers across various domains. 
The findings indicate that WAM meets or surpasses SOTA metrics for faithfulness, effectively identifying not only the locations of significant features but also highlighting their structural patterns.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Developing a multi-domain explanation method is intriguing, and the discussion of key challenges is reasonable.\\n2. The manuscript is well written. It is easy to follow this work.\\n3. The experiments conducted with the proposed methods are adequate.\", \"weaknesses\": \"1. I believe there is a lack of sufficient baselines. It would be helpful to include more options such as LIME, SHAP, and concept-based explanations for image and audio data. Since there is no quantitative evaluation in 3D settings, adding 3D LIME, SHAP, sensitivity analysis, and Layer-wise Relevance Propagation (LRP) for 3D baselines would be a solid starting point.\", \"references\": \"[1] \\\"Why Should I Trust You?\\\": Explaining the Predictions of Any Classifier \\n [2] A Unified Approach to Interpreting Model Predictions \\n [3] Towards Automatic Concept-based Explanations \\n2. The experiments were conducted on only one dataset; therefore, it would be essential to include results from several datasets.\\n3. In the audio results (Figure 1 and Figure 4), it is quite challenging to identify the areas being explained. Making the less important areas grayscale while highlighting the significant areas in red would improve interpretability.\\n4. It would have been better to conduct a human study for the qualitative evaluation. 
For example, utilizing Amazon Mechanical Turk (MTurk) to ask annotators to evaluate WAM while providing explanations for other baselines would be beneficial.\", \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"This work uses wavelet decompositions and gradient-based attribution techniques to capture location and content of the data used by a classifier to make a decision.\", \"After reading reviews and the author's rebuttal, I see there are some remaining concerns to be addressed:\", \"AFY7: Concerned about the amount of contribution/impact of this work.\", \"kXsu: Remains concerned about the practicality of this work from a user standpoint.\", \"AcnY: Remains concerned about the quality of the experimental evaluation.\", \"F8CS: Decided to keep their score after the author rebuttal without justification.\"], \"additional_comments_on_reviewer_discussion\": \"During the reviewer / AC discussion AFY and AcnY reaffirmed their concerns and thus I cannot support the acceptance of this work.\"}", "{\"summary\": \"The paper presents the Wavelet Attribution Method (WAM), a novel explainability approach for deep learning models in the wavelet domain. Unlike traditional pixel-based attribution methods (saliency maps), WAM leverages the structural benefits of the wavelet domain, offering a more generalizable approach that applies to various data modalities, including images, audio, and 3D shapes. WAM decomposes the model\\u2019s decisions by analyzing how features in the wavelet-transformed space affect predictions. The method integrates gradient-based attribution techniques with wavelet decomposition to capture both where (location) and what (content) aspects of the data structure. 
Through empirical evaluation, WAM demonstrates superior performance on faithfulness and fidelity metrics for image and audio data, achieving enhanced clarity in the model\\u2019s decision-making process.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"WAM brings an intriguing approach by utilizing wavelet decomposition to improve gradient-based feature attribution methods. The method leverages the mathematical properties of the wavelet domain, potentially addressing limitations of saliency maps that flatten hierarchical and spatial relationships. This could provide meaningful explanations by capturing features across multiple scales. In theory, WAM\\u2019s emphasis on inter-scale dependencies could enhance explainability across images, audio, and 3D shapes, offering an innovative view on XAI. Additionally, by unifying SmoothGrad and Integrated Gradients, WAM capitalizes on established approaches while potentially broadening their applicability across multiple modalities. This multimodal capability, though perhaps overstated, should be a promising generalization that is not commonly found in the comparison methods, which are often restricted to single data domains.\", \"weaknesses\": \"Despite its ambitious goals, WAM introduces several ambiguities and potential oversights. Key among them is the unclear use of 'structural components', a term the paper uses to describe feature-level insights that the method claims to provide. This concept, critical to WAM\\u2019s claims of 'what' explainability, lacks a clear definition or grounding in quantifiable relationships among components, making it difficult to ascertain whether these are indeed structural features rather than just relevant input attributes. Furthermore, while wavelet decomposition is introduced as a novel approach to attribution, the practical interpretability of multi-scale heatmaps remains underexplored in the paper. 
it is unclear how users can derive specific insights from these maps without a more explicit explanation. WAM\\u2019s assertion of state-of-the-art (SOTA) performance is another potential weakness, given that its comparisons rely largely on 2017 models like SmoothGrad, Grad-CAM, and Integrated Gradients, raising questions about whether the method is genuinely competitive in the context of more recent advancements in the XAI field. Additionally, the effectiveness of the faithfulness metrics used to benchmark WAM\\u2019s performance could benefit from further clarification, especially given the method\\u2019s claims of surpassing existing techniques across domains.\", \"questions\": \"1. Could you provide a more rigorous definition of 'structural components' and clarify how they differ from standard features in the context of explainability? Specifically, can we establish any meaningful relationships between these components based on their implicit structure?\\n\\n2. Since inter-scale dependencies are central to your claims, what specific dependencies does the wavelet domain preserve, and how does this preservation impact attribution in practice? For example, in the case of image explanations, presenting different scales of explanations does not seem to provide substantial additional insight.\\n\\n3. To what extent have you included newer, state-of-the-art models in your evaluation, and how might WAM perform with models developed after 2017, considering the rapid advancements in explainability techniques? Have you considered expanding the method to a self-explainable framework by introducing a novel loss term directly in the wavelet domain?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
50RNY6uM2Q
MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning
[ "Xiangyu Zhao", "Xiangtai Li", "Haodong Duan", "Haian Huang", "Yining Li", "Kai Chen", "Hua Yang" ]
Multi-modal large language models (MLLMs) have made significant strides in various image comprehension tasks. However, the majority of these models are constrained to processing low-resolution images, which limits their effectiveness in perception tasks that necessitate detailed visual information. In our study, we present MG-LLaVA, an innovative MLLM that enhances the model's visual processing capabilities by incorporating a multi-granularity vision flow, which includes low-resolution, high-resolution, and object-centric features. We propose the integration of an additional high-resolution visual encoder to capture fine-grained details, which are then fused with base visual features through a Conv-Gate fusion network. To further refine the model's object recognition abilities, we incorporate object-level features derived from bounding boxes identified by offline detectors. Being trained solely on publicly available multimodal data through instruction tuning, MG-LLaVA demonstrates exceptional perception skills. We instantiate MG-LLaVA with a wide variety of language encoders, ranging from 3.8B to 34B, to evaluate the model's performance comprehensively. Extensive evaluations across multiple benchmarks demonstrate that MG-LLaVA outperforms existing MLLMs of comparable parameter sizes, showcasing its remarkable efficacy.
[ "Multi-Modality", "Large Language Models" ]
https://openreview.net/pdf?id=50RNY6uM2Q
https://openreview.net/forum?id=50RNY6uM2Q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "j5pCcMV12j", "Km4nAyFi5c", "GCrTTYZP8d", "EHL3xkkf1O", "BpSahESDjl", "4DXIavJmf1" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1734190516475, 1730104409797, 1730443212183, 1730253886241, 1730715193473, 1730714804404 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2862/Authors" ], [ "ICLR.cc/2025/Conference/Submission2862/Reviewer_8mda" ], [ "ICLR.cc/2025/Conference/Submission2862/Reviewer_AQZ3" ], [ "ICLR.cc/2025/Conference/Submission2862/Reviewer_uMyd" ], [ "ICLR.cc/2025/Conference/Submission2862/Reviewer_hMBr" ], [ "ICLR.cc/2025/Conference/Submission2862/Reviewer_V5Vw" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We are very grateful for the professional service provided by reviewers throughout the review process and for the valuable feedback offered.\"}", "{\"summary\": \"Summary:\\n\\nMG-LLaVA is a multi-modal large language model (MLLM) designed to improve visual processing capabilities by using a multi-granularity vision flow. This includes low-resolution, high-resolution, and object-centric features to enhance perception tasks requiring detailed visual information. Extensive experiments have validated its effectiveness.\", \"contributions\": \"1. Leveraging an additional open vocabulary detection model introduces multi-granularity object-level features to enhance the perceptual capabilities of MLLMs.\\n2. Extensive experiments demonstrate the effectiveness of the method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The structure of paper is simple and easy to read, and the model implementation is very easy to follow.\\n2. The idea is very straightforward, and the experiments are solid. 
It is reasonable to introduce multi-granularity object-level features to enhance the perceptual capabilities of Multimodal Large Language Models (MLLMs).\", \"weaknesses\": \"1. The idea appears incremental, as it simply integrates high-resolution image interpretation with region-level image understanding, resembling a trick\\n2. Experimental evaluations and fair comparisons are notably lacking. Given that multi-granularity features are utilized to augment the model's perceptual abilities, evaluations should be conducted on fine-grained perception datasets. General VQA is inadequate for assessing the fine-grained perceptual capabilities of MLLM. \\n3. Excessive reliance on additional detector inputs may result in suboptimal, non-end-to-end outcomes.\", \"questions\": \"1. Although the method performs well on the general VQA, it lacks a comprehensive assessment of fine-grained perception capabilities. It would be more fair and convincing to compare it with region-level methods like Next-Chat and Osprey on the RefCOCO dataset. This could be accomplished by using the bounding box of the corresponding target as input.\\n\\n2. It is evident that using object-level features can enhance the perception ability of MLLMs. However, incorporating additional detectors introduces extra computational costs and biases. An equitable efficiency comparison is necessary. if these added costs surpass the benefits from data expansion, parameter extension, or data filtering, it results in negative optimization, as I believe is the case with MG-LLaVA. From the performance comparison, when using Vicuna 7B, MMStar exhibits lower performance than other models, indicating data leakage risk and validating the risk of bias introduced by reliance on detectors.\\n\\n3. Although MG-LLaVA shows improvements in general capabilities, these enhancements are marginal. The added expense of using additional detection models and object-level features should yield a greater performance boost. 
Moreover, during inference, reliance on detection results from other models is cumbersome. Transforming external dependencies into self-mining processes could significantly enhance the practical utility of the model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel Multimodal Large Language Model (MLLM) that improves visual processing capabilities by incorporating a multi-granularity vision flow, which includes low-resolution, high-resolution, and object-centric features. This approach enhances the performance of current Large Language Models (LLMs), as demonstrated in the experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The integration and fusion of multi-granularity features with object-centric features is novel for MLLMs.\\n2. Experimental results demonstrate the effectiveness of the proposed pipeline.\\n3. The paper is well-written and clearly presented.\", \"weaknesses\": \"1. The performance improvement on similarly sized LLMs in Table 2 and Table 3 appears modest.\\n2. The ablation study would benefit from visual comparisons to illustrate the impact of each component, such as case studies or visualizations of feature-level effects.\\n3. Some failure cases should be shown to provide insights into the method\\u2019s limitations.\\n4. It is unclear if the method can handle larger images, such as 1024p or 2k resolutions.\", \"questions\": \"1. The performance improvement on similarly sized LLMs in Table 2 and Table 3 appears modest.\\n2. The ablation study would benefit from visual comparisons to illustrate the impact of each component, such as case studies or visualizations of feature-level effects.\\n3. Some failure cases should be shown to provide insights into the method\\u2019s limitations.\\n4. 
It is unclear if the method can handle larger images, such as 1024p or 2k resolutions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to improve the visual capabilities of MLMs (multimodal large models) by proposing a new model, MG-LLaVA. Limited by resources, most MLMs nowadays have only low-resolution inputs, which is challenging for fine-grained tasks. Therefore, this paper proposes a novel framework that introduces object-level features in addition to a high-resolution visual encoder. Based on these, the article also uses a gating-based fusion strategy as well as explicit integration of object-level features. These approaches reduce the computational pressure introduced by high-resolution images and simultaneously improve performance on fine-grained tasks. On MMBench and SEEDBench, the model outperforms even the private models GPT-4V and GeminiPro-V. The article also conducts extensive experiments to show that their framework achieves competitive scores on multiple image and video datasets.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The goal of this paper is to release the power of MLMs on fine-grained tasks. A high-resolution visual encoder is introduced to complement previous work, and some fusion and compression strategies are introduced to ease the computational pressure. In addition, the article demonstrates that this new framework achieves significantly higher scores with MLMs at several scales, which fully demonstrates the effectiveness of the method. Moreover, this is the first approach to introduce object-level features in the field of MLMs, and experimentally, the article demonstrates the ability of their method to achieve higher scores than private models on MMBench and SEEDBench.\", \"weaknesses\": \"1. 
As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object detection only do some information enhancement on the input side, while many MLMs can already accomplish the object detection task by themselves nowadays.\\n2. The scores achieved on both the MMBench as well as SEEDBench datasets, while respectable, are not compared to some of the more competitive models. I identified MMB as version 1 and SEEDBench as Avg based on the scores of Qwen-VL and MiniCPM-V2, and there are a number of scores on both leaderboards that are higher than the scores of MG-LLaVA work, eg. Honeybee (Cha et al., 2024), AllSeeing-v2 (Wang et al. 2024) based on Vicuna-13b at MMB-test. and then you can also find a lot of similar models with higher scores on the same substrate.\\n3. In addition to Perception Benchmarks. this problem can also be found in Visual QA and Video QA. such as on the MSRVTT-QA dataset. there are already many models with very high scores in 2024. Some of them also use some methods to improve the model's ability on fine-grained tasks. eg. Flash-VStream (Zhang et al. 2024) Monkey (Li et al. 2023). The article does not seem to compare these new 2024 models.\\n\\nTo summarize, I think the approach proposed in the article is valid, but MG-LLaVA does not do the job of making a difference, either from an innovation perspective or from a performance perspective.\\n\\n[1] Cha, Junbum, et al. \\\"Honeybee: Locality-enhanced projector for multimodal llm.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\\n\\n[2] Wang, Weiyun, et al. 
\\\"The all-seeing project v2: Towards general relation comprehension of the open world.\\\" *arXiv preprint arXiv:2402.19474* (2024).\\n\\n[3] Zhang, Haoji, et al. \\\"Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams.\\\" *arXiv preprint arXiv:2406.08085* (2024).\\n\\n[4] Li, Zhang, et al. \\\"Monkey: Image resolution and text label are important things for large multi-modal models.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.\", \"questions\": \"1. The SEEDBench mentioned in the article uses SEEDBench-Image, but I checked the scores for leaderboard and the other methods mentioned in the paper, and they seem to correspond to SEEDBench-Avg (which contains both video and image), so it's not clear to me whether the comparison here includes scores from the video task.\\n2. If an open vocabulary detector is used, why is a tagger used to determine the bounding box instead of generating ROI directly based on text embedding?\\n3. The article suggests that this approach is intuitively better for small target comprehension or counting tasks, are there any datasets in this area that show that this approach has more significant performance gains on specific tasks?\\n4. I found that the Monkey model uses a similar idea to enhance the performance of the model and also proposes to augment the data with traditional CV methods for refinement, is there a comparison to this approach in the paper? For example, changing the base LLM to Qwen-7b to compare with Monkey (Li et al. 2023) and more models on this field.\\n\\n[1] Li, Zhang, et al. \\\"Monkey: Image resolution and text label are important things for large multi-modal models.\\\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 
2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an MLLM architecture to improve the multi-granularity visual understanding abilities of multimodal models. The method follows the idea proposed in Mini-Gemini to fuse high and low-resolution visual encoders and adds object recognition from other foundation models to enhance object-level understanding ability. A series of models ranging from 3.8B to 34B are proposed and the models are evaluated on multiple popular benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-organized. The proposed model is evaluated on multiple tasks including general visual understanding benchmarks, VQA, and video datasets. Ablation study and runtime evaluation are also provided.\"], \"weaknesses\": \"- The technical contribution of the paper is not very significant. The paper claims the main contribution is combining low, high-resolution, and object-level features. But the design of combining low and high-resolution features mainly comes from mini-Gemini and some modifications on the fusion module are proposed in the paper. The introduction of object-level features requires extra models and makes the base architecture more complex.\\n\\n- I am not convinced about the necessity of introducing the extra object-level information. Recent state-of-the-art MLLMs like Qwen2 VL [r1], LLaVA-Onevision [r2] and Pixtral [r3] all adopt a single encoder solution. I think the MLLMs themselves should have the ability to capture object-level information from the images with sufficient data and a proper training strategy. The method proposed in the paper may reduce the requirement for training data but the more complex architecture also makes the solution less general. Besides, the method also didn't show large improvements on SEED (69.4 vs. 
68.9) and MMStar (35.1 vs. 37.6) compared to Mini-Gemini with the same base LLM. \\n\\n- Many recent MLLMs like [r2, r4] are not compared. Compared to these methods, the proposed solution is not strong enough and imore complex. \\n\\n[r1] Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution\\n\\n[r2] LLaVA-OneVision: Easy Visual Task Transfer\\n\\n[r3] Pixtral 12B\\n\\n[r4] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs\", \"questions\": \"Please refer to my comments above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes MG-LLaVA to improve the capability for recognizing multi-granularity features for current MLLMs and flatten the restriction of the resolution in visual inputs. MG-LLaVA includes the low-resolution, high-resolution, and object-level features altogether and fuses them with a conv-gate fusion module for the general visual features. The object ROIs are extracted by a pre-trained detection model for better object-level understanding skills. The paper proposes a series of MLLMs ranging from 3.8B to 34B based on various LLMs and shows strong performance across image and video multimodal benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The key claim for the paper that multi-granularity features with low-res, high-res, and object features can improve detailed understanding and object recognition skills is reasonable. The authors design the conv-gated fusing module and demonstrate its effectiveness through complete ablation studies.\\n2. The series of models and benchmarks are clear and complete. The authors train the variants for MG-LLaVA based on Phi, Vicuna, LLaMA3, and Yi1.5 and conduct experiments on various multi-modal benchmarks. \\n3. 
The overall architecture of the paper is well-structured and easy to follow.\", \"weaknesses\": \"1. The idea of fusing multi-granularity features is not novel, as integrating low-resolution and high-resolution images has been demonstrated to be effective by a range of works, including LLaVA-NeXt, LLaVA-HR, Mini-Gemini, LLaVA-UHD, etc. The difference in MG-LLaVA lies in the usage of detected objects. However, the detection operation introduces extra computational costs and external models with extra information, which is not an optimal solution.\\n2. The performance comparisons against existing MLLMs are relatively weak. For example, the results of MG-LLaVA equipped with Vicuna-7B do not surpass baselines with similar efforts on some benchmarks, including SQA, TextVQA, MMStar, etc. The overall amount of training data is significantly heavier than the baselines (2.6M), which makes the comparisons more unfair. \\n3. Some key ablations seem lacking. The authors are encouraged to clearly show the contribution to performance of every part of the visual feature to show the difference from previous works.\", \"questions\": \"The concerning questions are stated in the weakness section. Based on the weaknesses listed above, I lean toward rejecting this manuscript in its current version.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
506BjJ1ziZ
COME: Test-time Adaption by Conservatively Minimizing Entropy
[ "Qingyang Zhang", "Yatao Bian", "Xinke Kong", "Peilin Zhao", "Changqing Zhang" ]
Machine learning models must continuously adjust themselves to novel data distributions in the open world. As the predominant principle, entropy minimization (EM) has been proven to be a simple yet effective cornerstone in existing test-time adaption (TTA) methods. Unfortunately, its fatal limitation (i.e., overconfidence) tends to result in model collapse. For this issue, we propose to \textbf{\texttt{Co}}nservatively \textbf{\texttt{M}}inimize the \textbf{\texttt{E}}ntropy (\texttt{COME}), which is a simple drop-in replacement of traditional EM to elegantly address the limitation. In essence, \texttt{COME} explicitly models the uncertainty by characterizing a Dirichlet prior distribution over model predictions during TTA. By doing so, \texttt{COME} naturally regularizes the model to favor conservative confidence on unreliable samples. Theoretically, we provide a preliminary analysis to reveal the ability of \texttt{COME} in enhancing the optimization stability by introducing a data-adaptive lower bound on the entropy. Empirically, our method achieves state-of-the-art performance on commonly used benchmarks, showing significant improvements in terms of classification accuracy and uncertainty estimation under various settings including standard, life-long and open-world TTA, i.e., up to $34.5\%$ improvement on accuracy and $15.1\%$ on false positive rate. Our code is available at: \href{https://github.com/BlueWhaleLab/COME}{https://github.com/BlueWhaleLab/COME}.
[ "Test-time adaption", "Out-of-distribution generalization" ]
Accept (Poster)
https://openreview.net/pdf?id=506BjJ1ziZ
https://openreview.net/forum?id=506BjJ1ziZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uSnEssQfOn", "tHWijz4HH8", "pzSaUjNdxU", "kToRSE8pL4", "jwvfjEM80x", "hP1whBOBe4", "elI0cAYc7F", "YGbGbjZKPJ", "Y7lCXKuvEg", "Xb6E6bj26O", "WmHKPNrApS", "Vp0vz2Rkif", "S6rT5I7IFf", "RsuIKniT99", "QR0pnFmEJs", "PpIMojtmQK", "PELrbU0oT2", "OCQJnttW31", "O5IcPnPtSJ", "MByjBZ20Rc", "L5z9Bd5Pum", "L2xGirkAIb", "HgBfa5y9zJ", "H9Jv9HICmd", "En1bldXFHK", "EPG3F0tsfm", "D5YuBuykzj", "BbZvpIFHUx", "8zePqOBXPQ", "8btFfDs52p", "5kPVYyhgxk", "5hcbFTwwq5", "5Qk7iro0LH", "1zwa2x0Tqs" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523408116, 1733037797708, 1732677650348, 1732530381827, 1732548864777, 1732629037926, 1732520580279, 1732610941089, 1732173557555, 1730739040015, 1734559182985, 1730822042654, 1732423816863, 1733225978580, 1730436468728, 1732371896235, 1733231514466, 1732533918350, 1732095424708, 1732095548273, 1732513350906, 1730090949091, 1732094887093, 1733115912353, 1732539396850, 1733222704835, 1732199080473, 1732095168482, 1732525645418, 1732370259768, 1732095636427, 1732095090597, 1732607409380, 1732602572044 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission632/Reviewer_WHFP" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_5e1E" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_4L2X" ], [ "ICLR.cc/2025/Conference/Submission632/Area_Chair_B2GA" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_saue" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_WHFP" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_saue" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_5e1E" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_4L2X" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_4L2X" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_5e1E" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Authors" ], [ "ICLR.cc/2025/Conference/Submission632/Reviewer_5e1E" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear Reviewer 4L2X,\\n\\nThanks for your feedback. 
In our response, we provided a detailed explanation for the choice of datasets and clarified that our method still outperforms its counterparts under 17 diverse distribution shifts in total (ImageNet-C, ImageNet-S and ImageNet-R) when there is no outlier. We also provided proof of Lemma 1 in Appendix A.\\n\\nWe value your comments on our method. However, given that you are the only one who is negative about our paper, and the discussion period is ending soon, please feel free to let us know if you have any further comments on our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Thanks for your support\", \"comment\": \"Thank you for your support and the additional suggestions regarding [1]. We will certainly discuss this strategy in our revision. If there are any further insights, clarifications, or questions you would like to share with us, please feel free to reach out. We truly appreciate your positive assessment and time in reviewing our paper.\"}", "{\"title\": \"Looking forward to your reply\", \"comment\": \"Dear reviewer WHFP,\\n\\nWe appreciate your efforts already put into evaluating our work. We posted rebuttal and a new revision including additional results on two large-scale benchmarks. As the discussion period is nearing its conclusion, could we kindly inquire if you have any remaining questions or concerns? Thanks for your valuable suggestions.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for your reply and new comments. We feel there exists significant misunderstanding and we would like to clarify\\n\\n- there is no derivation for this (Lemma 1)\\n\\n**In fact, we have provided the full proofs in Appendix A, line 717-724.**\\n\\n- the model confidence upper bound in Theorem 1 is based on the assumption of the original (precise) objective Equation. 6\\n\\nAs we do have provided the proof of Lemma 1, this assumption is reasonable. 
Besides, as mentioned by other reviewers (5e1E and saue), Theorem 1 can aid in understanding the proposed method.\\n\\n- Why should we add outliers to test the test-time adaptation methods?\\n\\n(1) Test-time adaption is performed during the model's deployment in the real world, where it is inevitable to encounter outliers. There have been **6 works [3-8] published at top venues that adopt the same setting**. (2) **Open-world TTA is one of the three settings we considered.** We also conduct standard TTA experiments on ImageNet-C in Table 1. **Our method still outperforms its counterparts when there are no outliers**. (3) To further address the concerns on datasets, we conduct additional experiments on yearbook, a dataset from the suggested **WILD-time**. Since the official repo of WILD-time is not actively maintained, we manually download the dataset from this unofficial repo: https://github.com/wistuba/Wild-Time-Data/.\\n\\nClassification accuracy on Yearbook (no adapt acc=84.5, backbone is a 6-layer fully convolutional network)\\n\\n| | Tent | EATA | SAR |\\n| ---- | ----- | ----- | ----- |\\n| EM | 85.31 | 86.58 | 86.80 |\\n| COME | 86.90 | 87.56 | 87.30 |\\n\\nWe promise to integrate the results into our latest revision. We are trying to conduct more experiments on other datasets from WILD-time like FMOW-time, and the results will be integrated if time permits. Would you mind checking our responses and considering re-evaluating our manuscript? Thanks for your attention and best regards.\"}", "{\"comment\": \"Thank you for your response and new experimental results. I believe my main concerns were addressed. As a result, I will increase my score from 5 to 6 accordingly.\", \"here_is_the_minor_concern\": \"I mentioned that \\\"The core concept of this paper shares some similarity with research on learning with rejection.\\\" By this, I meant that the use of a margin to reject predictions in [1] is conceptually similar to the approach proposed in this work. 
However, this work uses entropy to make decisions due to its unsupervised setting. It should be beneficial to discuss this line of research to make the paper more self-contained.\"}", "{\"title\": \"Thanks for your support\", \"comment\": \"Thanks for your support and follow-up suggestions.\\n\\nWe have carefully revised the manuscript following your latest suggestions, in which we detail the implementation of subjective opinion and theoretical assumptions. Besides, we identify the limitations of the currently used regularization techniques in the conclusions: The simplicity of the uncertainty regularization used in our implementation is both an advantage and a limitation. On the one hand, constraining the uncertainty mass close to the pretrained model is easy-to-deploy and meets the efficiency requirements of TTA. On the other hand, this regularization may be less effective when the pre-trained model is also overconfident. We identify this as a limitation of our work. Exploring more effective regularization techniques for a better trade-off between the practical requirements of TTA and accurate uncertainty estimation could be a promising future direction.\\n\\nWe feel very lucky to get such high-quality reviews and sincerely thank you and other reviewers for your professional suggestions. It's our duty to continue working on this project and ensure it meets the expectations of the ICLR community.\"}", "{\"title\": \"Additional results on dynamic environments (temporal distribution shifts)\", \"comment\": \"Dear reviewer WHFP,\\n\\n\\nDue to the time limit, we have just completed the experimental results on yearbook from **WILD-time**. We agree with you that temporal distribution shift is an interesting setting. 
**Since it is a very novel research direction, the benchmarks in this field are very limited and the repo of the original WILD-time is not actively maintained.** Thus we download the dataset from an unofficial third-party repo and also follow the very recent work [1] to simulate temporal distribution shifts. Specifically, we rotate the test images from ImageNet-R by a certain degree per batch. The results are as follows:\\n\\nClassification accuracy on Yearbook (no adapt acc=84.5, backbone is a 6-layer fully convolutional network)\\n\\n| | Tent | EATA | SAR |\\n| ---- | ----- | ----- | ----- |\\n| EM | 85.31 | 86.58 | 86.80 |\\n| COME | 86.90 | 87.56 | 87.30 |\\n\\nClassification accuracy on ImageNet-R under temporally dynamic distribution shifts (no adapt acc=22.26, backbone is ViT-base)\\n\\n| | Tent | EATA | SAR |\\n| ---- | ----- | ----- | ----- |\\n| EM | 24.44 | 23.45 | 24.48 |\\n| COME | 27.21 | 25.63 | 28.90 |\\n\\n[1] Koebler, Alexander, et al. \\\"Incremental Uncertainty-aware Performance Monitoring with Labeling Intervention.\\\" *NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty*.
I would appreciate the inclusion of additional details to support reproducibility.\"]}", "{\"summary\": \"The authors propose Conservatively Minimizing Entropy, a method for test-time adaptation (TTA) that improves adaptation by managing prediction uncertainty. Unlike traditional entropy minimization, which can lead to overconfidence, COME uses a Dirichlet distribution to model uncertainty, allowing the model to avoid forced classification on unreliable data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-motivated, and the story makes sense.\", \"Extensive experiments have been done to support the proposed method.\"], \"weaknesses\": [\"My major concerns include:\", \"For the proposed method: Why is the Dirichlet distribution used? How is the Dirichlet distribution related to the final algorithm in Algorithm 1? In addition, what is the role of delta in Algorithm 1? It seems that the authors tell a long story about their algorithm, but the algorithm itself is rather simple.\", \"For the theoretical analysis: Could the authors provide a more detailed (theoretical) comparison between the proposed method and traditional EM? What is the benefit?\", \"For baselines: Some baselines are missing, for example, [1] and [2].\", \"For the datasets: I'm curious why the authors follow the literature on outlier detection.\", \"For Theorem 1: What is the exact benefit of the upper bound of model confidence? I think it will also hurt the performance on some \\\"confident\\\" samples.\", \"[1] Nado, Z., Padhy, S., Sculley, D., D'Amour, A., Lakshminarayanan, B., & Snoek, J. (2020). Evaluating prediction-time batch normalization for robustness under covariate shift. arXiv preprint arXiv:2006.10963.\", \"[2] Zhou, A., & Levine, S. (2021). Bayesian adaptation for covariate shift. 
Advances in neural information processing systems, 34, 914-927.\"], \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies the issue of model collapse in entropy minimization algorithms for test-time adaptation. It proposes an entropy minimization approach that models prediction uncertainty by a Dirichlet prior distribution over model predictions. This method regularizes the model to favor conservative confidence for unreliable samples due to softmax. Experiments on typical benchmark datasets demonstrate the effectiveness of the proposed algorithm. I recommend acceptance and suggest the authors conduct one additional experiment that checks the approximation for Eq. (6). In the experiments, two-layer neural networks are sufficient to verify this if the constrained optimization is not easy to solve for deep neural networks.\", \"additional_comments_on_reviewer_discussion\": \"After the rebuttal, most of the issues are handled. The only issue left is that no proof is given for Lemma 1 that guarantees the constraints in Eq. (6). I checked the authors' comments and understood the reviewer's concern. I suggest the authors add one additional experiment to compare the setting of using Eq. (6) and its approximation. In the experiments, two-layer neural networks are sufficient to verify this if the constrained optimization is not easy to solve for deep neural networks.\"}", "{\"summary\": \"This paper addresses the model collapse of the popular Entropy Minimization algorithm for Test-Time Adaptation. 
Motivated by the observation that the amplification of model over-confidence causes model collapse, this paper proposes to minimize entropy with respect to an augmented output distribution that includes the probability to reject a sample, which is an uncertainty estimation technique known as subjective logic. Moreover, a logit normalization is designed in order to avoid degenerated solutions. Theoretical analysis reveals that the resulting approach upper bounds model confidence during adaptation based on the sample-specific confidence of the initial model. The resulting algorithm, COME, can be easily embedded into general EM-based TTA methods with a few lines of code revision. Experiments across TTA, open-world and life long TTA settings demonstrate a significant and consistent improvement upon current EM baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper accurately spots the paradox of EM's learning objective: minimization of entropy leads to over-confidence. And the paper proposes a simple yet effective solution to minimize entropy with respect to a probability distribution that faithfully estimates the uncertainty without over-confidence. It is a very reasonable idea to differentiate between the statistics used for prediction and for uncertainty estimation, which has long been considered the same in the TTA literature. Therefore, the algorithm enjoys the feature that the entropy minimization can be tailored to samples with different uncertainty, which is also supported by the monotonicity result in Theorem 1. The introduction of SL for uncertainty estimation is natural and perfectly compatible with softmax functions used for training models in most cases. As a result, the implementation is light-weight, model-agnostic, and extremely easy to embed into any TTA algorithms based on the EM objective. 
The experiments are convincing by covering both standard TTA tasks and more challenging settings of open-world TTA. A surprisingly significant 34.5% improvement on accuracy is reported on the model of SAR. And the algorithm has further addressed uncertainty estimation under continual distribution shift as a side product, which itself is also an important problem.\", \"weaknesses\": \"These are not necessarily weaknesses but rather some questions that I would like to confirm with the author.\\n1. How does the algorithm ensure that $b_k$ is non-negative for the computation of entropy, since $b_k$ is implemented as \\n$(e^{f_k(x)}-1)/ \\\\sum_{k'} e^{f_k'(x)}$ which could be negative?\\n\\n2. Why does the algorithm keep $u$ close to $u_0$? Does it imply that the uncertainty estimation for the pretrained model is trusted? What if the pretrained model is over-confident? What about the alternative constraint $u \\\\geq u_0$ which seems to be more conservative as is the objective of COME?\\n\\n3. Average false positive rate is used in experiments to assess uncertainty estimation. However, FPR measures the correctness of uncertainty estimation with such a binary perspective: for the samples we predict 1, what is the actual proportion of 0. Uncertainty estimation considers a more sophisticated question: for the samples we predict with a probability 0.7, is the actual proportion of 1 exactly 0.7? Expected calibration error (ECE) is a better metric in this sense.\\n\\n4. Are there standard errors of the reported results?\", \"questions\": \"1. The tightness for the upper and lower bounds in Lemma 1 is determined by the choice of p. By considering the simple model where f(x) outputs the same logit for all classes, the ratio between the upper and lower bound is minimized by $p=\\\\infty$. Is it a better choice to consider $\\\\| f(x) \\\\|_\\\\infty = \\\\max_k | f_k(x) |$?\\n\\n2. 
The usage of exp transformation of logits in the softmax function appears pivotal to the proof of Theorem 1. And if we take b(x) as a general non-negative function of f(x), the upper bound may reduce to 1. And minimizing the entropy of opinion in equation 6 can also lead to the Dirac distribution, which is over-confident. Is there a characterization for the class of functions that could be used to form a reasonable belief b(x)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to your reply\", \"comment\": \"Dear Reviewer, AC, SAC and PC,\\n\\nWe thank all the reviewers for your valuable comments and for recognizing our contribution of a reasonable and well-motivated idea (Reviewer saue, 4L2X), interesting design (Reviewer 5e1E), light-weight, model-agnostic, simple-to-integrate and extremely easy-to-implement method (Reviewer saue, WHFP), extensive experiments with surprisingly significant, satisfactory improvement (Reviewer saue, 4L2X, WHFP), and theoretical analysis that can aid understanding (Reviewer 5e1E). During the rebuttal, we addressed the reviewers' concerns with the following updates and improvements:\\n\\n1. Full results with standard deviations (Table 5 in Appendix C.1, page 17).\\n2. New baselines, i.e., batch normalization (arxiv 2020) and using Bayesian ensemble for TTA (NeurIPS'21). The results are in Table 8, Appendix C.4, page 18.\\n3. Two new benchmarks, i.e., ImageNet-R and ImageNet-S instead of the suggested but no longer actively maintained wild-time (Table 11, 12 in Appendix C.6, page 19). We appreciate the kind understanding from the reviewer.\\n4. Discussion about the advantages of the proposed method beyond classic learning with rejection methods in the unsupervised online TTA setting (line 210-220).\\n5. Math details and more clarification of the established theory (line 287-297).\\n6. 
Point-by-Point Responses: Comprehensive responses to specific reviewer comments.\\n\\nWe highlight them in **blue** in the latest revision. As the ICLR public discussion phase is ending soon, we kindly encourage you to share any feedback or questions on our submission while there\u2019s still time. We\u2019d be happy to address any concerns or provide clarifications to assist with your evaluation. We wish you success in your own research too.\"}", "{\"comment\": \"Thanks for your comments. **It is our duty to correctly grasp your concerns and address them.**\\n\\n- there is no proof to show that Lemma 1 guarantees the constraints in Equation (6)\\n\\nAs shown in Lemma 1, the uncertainty mass is constrained by the norm of evidence. Thus we can constrain the norm instead of directly calculating the uncertainty mass and comparing them. That is, if the norm is unchanged, we can expect the uncertainty mass not to diverge too far away. The logic here is natural, straightforward and recognized by other reviewers.\\n\\n- In Theorem 1, the constraints in Equation (6) is directly assumed...there is a mismatch between them (Eq.8 and Eq.6).\\n\\nWe understand your point here and would like to argue that (1) Eq.8 is an **effective and practical approximation** of Eq.6, which does not strictly ensure an exactly unchanged evidence norm due to the complexity of modern deep neural networks. Directly enforcing the invariance of the evidence's norm from Lemma 1 would lead to practical difficulties (inevitably requiring the storage of two copies of the model and explicit comparison). (2) **Such simplification can bring many practical benefits.** As mentioned by reviewer 5e1E, this is an **effective** and **interesting** design which significantly simplified the optimization. **Reviewer saue** mentioned that our method is **simple yet effective**, and the simplicity is its most **compelling** feature. 
(3) **Empirical study can support this design.** Please note that we provide empirical evidence that can support our theory in Figure 2, page 9 (the model confidence is much more conservative, as we expected). Our method achieves superior performance in practice. On large-scale ImageNet benchmarks under 15 diverse corruptions, our method substantially improves the average classification accuracy under the standard TTA setting\\n\\n| | Tent | EATA | SAR | COTTA | MEMO |\\n| ----------- | ---- | ---- | ---- | ----- | ---- |\\n| EM | 52.8 | 62.1 | 54.2 | 46.1 | 42.3 |\\n| COME | 61.2 | 64.5 | 64.2 | 49.1 | 43.2 |\\n| Improvement | **8.4** | **2.4** | **10.1** | **3.0** | **0.9** |\\n\\nThus we believe our method is reasonable, the explanatory theory is helpful to aid understanding (as mentioned by reviewer 5e1E), and the effectiveness of our design choice can be supported by extensive empirical observations.\\n\\nAccording to your comments, we will add this sentence below Eq.8 in our final revision to **make this point transparent**: `` Due to the complexity of modern deep neural networks, Eq.8 cannot ensure an exactly unchanged uncertainty mass. However, this approximation is effective and significantly simplifies the implementation. Empirical evidence supporting this design can be found in...''. We greatly appreciate your open mind if you could further consider our clarification.
Experiments on benchmark datasets demonstrate the effectiveness of the proposed algorithm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed algorithm introduces a rejection mechanism for unreliable samples in the TTA process, preventing the model from learning from potentially noisy labeled data. It is simple to integrate into existing TTA frameworks, and the experimental results indicate satisfactory performance.\", \"weaknesses\": \"The proposed method and its theoretical analysis rely heavily on existing techniques, which limits its technical novelty.\\n\\nThe core concept shares some similarity with research on learning with rejection. It is recommended to discuss how the proposed loss function compares with the loss functions used in learning with rejection, as outlined in [1].\\n\\nThere is a lack of experiments involving real-world applications with distribution shifts, as exemplified in [2]. Testing the proposed algorithm on real-world data streams in dynamic environments is suggested to validate its robustness.\", \"references\": \"[1] Cortes, Corinna, Giulia DeSalvo, and Mehryar Mohri. \\\"Learning with rejection.\\\" Algorithmic Learning Theory (2016).\\n\\n[2] Yao, Huaxiu, et al. \\\"Wild-time: A benchmark of in-the-wild distribution shift over time.\\\" Advances in Neural Information Processing Systems 35 (2022): 10309-10324.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your reply. Yes, we have uploaded a latest revision in which we make efforts in addressing the aforementioned presentation issues **just a few minutes ago before your newest comments**. 
Following your suggestions, we have\", \"carefully condensed the introduction of related works and overconfidence issues (removed fig.2 and many sentences in Section 2 and 3 in the original manuscript)\", \"provided a detailed explanation of design choices (line 209-220, 280-294)\", \"illustrated possible generalization of the proposed method (line 235-238)\", \"avoided using ``avg'' as a metric in the experiments (Table 1 and 2 on page 8)\", \"clarified the source of standard deviation (line 843-847)\", \"We highlight the revised sections in our manuscript in blue. We are deeply encouraged by your raised score. **It is our duty to address the presentation issues.** We will keep actively working on this. Your feedback is very valuable for us.\"]}", "{\"comment\": \"Dear reviewer 4L2X,\\n\\nTo further address your concern, we conduct additional experiments to investigate how the p-norm of evidence and uncertainty mass $u$ in our method change during TTA, as additional empirical support to our theory. Here we show the mean value of the 2-norm of evidence $b$ and uncertainty mass $u$ on ImageNet-C Gaussian noise. As we can see, our constraint makes the norm and $u$ not diverge far from the pretrained model, as we expected. This phenomenon can also support the assumption in Theorem 1.\\n\\nmean value of 2-norm of evidence\\n| No Adapt | EM | COME |\\n| -------- | ----- | ----- |\\n| 52.35 | 57.23 | 51.46 |\\n| | +**4.88** | -**1.11** |\\n\\nmean value of uncertainty mass $u$\\n| No Adapt | EM | COME |\\n| -------- | ----- | ----- |\\n| 0.1932 | 0.1159 | 0.2002 |\\n| | **-0.0764** | +**0.0070** |\\n\\n**Though we had no chance to discuss with you after your last comment, which was posted an hour before the discussion deadline, we hope our efforts can alleviate your concerns. 
We are frustrated to see that you raised your confidence level to an absolutely high 5 just an hour before the end of the discussion period, after we have 1) provided comparisons with the suggested baselines, 2) clarified the details of our method, 3) provided empirical evidence to support the theory, and 4) evaluated our method on 17 diverse distribution shifts and cited 6 papers to explain the setting. At last, we sincerely thank you for the time and efforts you have put into our work, and we authors will keep working to make it better.**\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer 4L2X,\\n\\nWe have posted a rebuttal and a new revision including the suggested baselines and detailed clarifications. As the discussion period is ending soon, we kindly inquire if you have any remaining questions or concerns. Any insights, comments or questions that you would like to share on our manuscript are highly appreciated. Thanks for your efforts in reviewing our work, and we are sincerely looking forward to your reply.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Rebuttal WHFP\", \"comment\": \"We thank the reviewer for the thoughtful and thorough comments on our paper.\\n\\n- The proposed method and its theoretical analysis rely heavily on existing techniques, which limits its technical novelty...It is recommended to discuss how the proposed loss function compares with the loss functions used in learning with rejection, as outlined in [1].\\n\\nThanks for your comments. Here, we would like to discuss the differences between the proposed COME and the most related works including calibration and Bayesian uncertainty estimation methods. **Please kindly note that in a fully TTA setting, we can only access unlabeled test samples coming online**. Calibration methods rely on an additional validation set, which is not suitable for the TTA task. 
Other Bayesian uncertainty estimation methods like ensembling, BNNs and dropout need multiple inferences or modifications of the model architecture, which is also unsuitable. Compared to the most related subjective logic, which needs labeled training data for superior uncertainty estimation, our COME is specified for unsupervised online TTA settings. The idea of minimizing the entropy of opinion, the design of the uncertainty constraint, and the analysis on model confidence are newly proposed in this work, which makes our method diverge far from existing works and satisfy the practical requirements of efficient and stable TTA.\\n\\n- Testing the proposed algorithm on real-world data streams in dynamic environments is suggested to validate its robustness.\\n\\nThanks for your valuable suggestion. We agree that real-world data streams in dynamic environments are closely related to the proposed method. In Table 3, we have provided the results under **lifelong** TTA settings where the test environments vary dynamically (e.g., **continuously varying** weathers like snow, frost, fog...). Such a setting is also considered in recent TTA works specified for dynamic environments [2] [3]. We also aimed to conduct additional experiments on the suggested benchmark [4] to further evaluate the proposed method. 
**Please kindly note that the official repository of the suggested wild-time benchmark is no longer actively maintained.** Instead, we introduce two additional datasets, i.e., ImageNet-A and ImageNet-Sketch, to further evaluate the proposed method on commonly used OOD generalization benchmarks.\\n\\nClassification accuracy on ImageNet-R (no adapt acc=35.15)\\n\\n| | Tent | EATA | SAR | COTTA |\\n| ---- | ----- | ---- | ---- | ----- |\\n| EM | 37.73 | 36.11 | 36.77 | 36.04 |\\n| COME | 39.05 | 38.22 | 41.22 | 37.39 |\\n\\nClassification accuracy on ImageNet-S (no adapt acc=27.86)\\n\\n| | Tent | EATA | SAR | COTTA |\\n| ---- | ----- | ---- | ---- | ----- |\\n| EM | 31.63 | 39.26 | 33.92 | 30.84 |\\n| COME | 39.22 | 41.85 | 43.52 | 33.82 |\\n\\n[1] Learning with rejection, ALT'16\\n\\n[2] Towards stable test-time adaptation in dynamic wild world. ICLR'23\\n\\n[3] Efficient test-time model adaptation without forgetting. ICML'22\\n\\n[4] Wild-Time: A Benchmark of in-the-Wild Distribution Shifts over Time. NIPS'22\"}", "{\"title\": \"Rebuttal 5e1E (1/2)\", \"comment\": \"We appreciate your valuable and helpful comments on our paper and for recognizing the clear motivation and interesting idea. What you said makes sense and inspires us to think a lot.\\n\\n- Experiments on the dependence of hyperparameters are lacking. I would like to see the results and discussions when $p\\\\neq2$ and $\\\\tau\\\\neq 1$.\\n\\nFollowing your suggestion, we report the accuracy on ImageNet-C Gaussian noise level 5 with different hyperparameters. We implement our COME with Tent. 
The accuracy of the original Tent using entropy minimization is 52.6.\\n\\n| | p=1 | p=2 | p=3 | p=$\\\\infty$ |\\n| ---------- | ---- | ---- | ---- | ---------- |\\n| $\\\\tau=0.5$ | 37.8 | 38.5 | 39.1 | 41.3 |\\n| $\\\\tau=1$ | 53.3 | 53.8 | 53.2 | 47.2 |\\n| $\\\\tau=1.2$ | 54.6 | 54.7 | 53.8 | 48.6 |\\n| $\\\\tau=1.5$ | 54.4 | 54.2 | 53.2 | 48.8 |\\n\\nOur COME generally outperforms EM with moderate hyperparameters.\\n\\n- While the proposed method is technically sound, it would benefit from discussion in a broader context\\u2014such as unsupervised learning with a pretrained model, unsupervised domain adaptation, source-free domain adaptation, and semi-supervised learning\\u2014to emphasize its wide applicability as a key contribution.\\n\\nWe deeply appreciate these thoughtful and thorough suggestions. Here we discuss the settings you mentioned with additional experimental results.\\n\\nWe agree that there is a strong connection between Source-Free Domain Adaptation (SFDA) and Test-Time Adaptation (TTA). TTA focuses on **online** adjustment during testing. On the other hand, SFDA approaches generally perform **offline**. That is, the inference is deferred until the optimization is done. In contrast, our TTA method can achieve adaptation and inference at the same time.\\n\\nTo further validate the applicability of our method, we report the classification accuracy on ImageNet-C Gaussian noise level 5 under **source-free domain adaptation settings**. The baselines we considered include pseudo label (PL), mutual information maximization (IM), and entropy minimization (EM) following [1]. 
\\\"-\\\" means the model accuracy collapses to random guess level.\\n\\n| Epoch | PL | EM | IM | COME (Tent) | COME (EATA) | COME (SAR) | |\\n| ----- | ----- | ---- | ----- | ----------- | ----------- | ---------- | ---- |\\n| 1 | 31.55 | 0.55 | 60.71 | 66.50 | 68.38 | 66.83 | |\\n| 2 | - | - | 63.75 | 68.44 | 69.88 | 67.53 | |\\n| 3 | - | - | 65.07 | 69.20 | 70.07 | 68.06 | |\\n\\n- Error bars are missing.\\n\\nAccording to your comments we provide the full results with standard deviation in **Appendix C.6** of the revised paper. Due to the large scale of ImageNet-C benchmark, the performance is stable with a relatively small standard deviation.\\n\\n- The performance metric \\\"Avg.\\\" in the tables are nonsense, while it is actually a bad convention in the field. Obviously, a \\\"1% gain\\\" is quite different in iNaturalist and SSB-Hard, for example.\\n\\nThanks you for this suggestion. We will avoid using such metric in the revision.\\n\\n- A large part of the paper is dedicated to reviewing previously known works, such as the overconfidence problem, which hinders readability. Much of the paper, particularly the description of the proposed method, is redundant. A more direct tone is generally preferable in academic writing.\\n\\nWe appreciate your constructive comments on writing. We will carefully condense the introduction of related work and add more experimental results suggested by the reviewers instead.\\n\\n- What if a hyperparameter $\\\\lambda$ is introduced as $-\\\\sum_{k=1}^K b_k\\\\log b_k-\\\\lambda u\\\\log u$? This seems to be a straightforward generalization of Eq. (6).\\n\\nThis is an interesting idea. We believe that introducing this hyperparameter is practical. The magnitude of $\\\\lambda$ reflects our confidence in whether the input sample should be classified into one of the known K class. 
To validate such an idea, we conduct additional experiments on the influence of different $\\\\lambda$.\\n\\nClassification accuracy on ImageNet-C Gaussian noise level 5 with different $\\\\lambda$.\\n\\n| $\\\\lambda$ | 0.10 | 1.00 | 10.0 | 30.0 | 50.0 | 80.0 | 100 | 150 |\\n| --------- | ----- | ---- | ----- | ----- | ----- | ----- | ---- | ----- |\\n| Acc. | 53.65 | 53.9 | 54.69 | 55.44 | 56.00 | 56.04 | 55.9 | 10.86 |\\n\\nWe observe an additional performance improvement when $\\\\lambda\\\\in[1,100]$. We will add the results to the revised paper.\"}", "{\"comment\": \"Thank the authors for their thorough response, and my gratitude also goes to the other reviewers for their efforts. I maintain my evaluation of this paper, which I believe makes a good contribution by applying uncertainty estimation theory and practices to address the well-known but significant challenge of overconfidence in TTA. The proposed approach is based on established yet highly relevant techniques from uncertainty estimation and calibrated learning. I identify the major contribution as connecting the fields of uncertainty estimation and TTA and solving the long-standing overconfidence problem of EM, with a universal and simple enough modification to each existing pipeline. I find the simplicity and yet effectiveness of the proposed method to be the most compelling feature.\\n\\nThe rebuttal has addressed my concerns regarding W2, W3, and W4. In particular, I appreciate the standard error reporting, highlighting the significance of empirical results. Regarding W1, I suggest the authors clarify their implementation of the belief mechanism in the manuscript, and state any relevant simplifications to the theoretical model assumed for Theorem 1. 
For W2, I also suggest identifying the limitation of unresolved overconfidence in the pre-trained model.\\n\\nOverall, I maintain my positive evaluation of this paper and vote for its acceptance.\"}", "{\"summary\": \"The paper proposes a Bayesian inference technique to address the overconfidence problem in test-time domain adaptation. Experiments demonstrate its effectiveness, achieving a SOTA performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The motivation of the proposed method is clear.\", \"The idea of transforming the optimization problem with a constraint (Eq. (7)) into a simpler form (Eq. (9)) is interesting and effective, which significantly simplifies the problem.\", \"Several theoretical results, as well as empirical results, are provided. Theorem 1 helps quantitative understanding of the proposed method.\", \"Code is available.\"], \"weaknesses\": [\"Experiments on the dependence of the results on $p$ and $\\\\tau$ are lacking. I would like to see the results and discussions when $p \\\\neq 2$ and $\\\\tau \\\\neq 1$.\", \"(Major) The paper applies a Bayesian inference (Eq. (5)) with a regularization to the overconfidence problem inherent in TTA. The \\\"EM\\\" (entropy minimization) mentioned in the paper is, in a more general context, the unsupervised learning using soft pseudo-label, which has a wide variety of applications beyond TTA.\", \"While the proposed method is technically sound, it would benefit from discussion in a broader context\\u2014such as unsupervised learning with a pretrained model, unsupervised domain adaptation, source-free domain adaptation, and semi-supervised learning\\u2014to emphasize its wide applicability as a key contribution.\", \"(Major) Error bars are missing. Could you provide error bars because several performance gains are marginal?\", \"The performance metric \\\"Avg.\\\" in the tables are nonsense, while it is actually a bad convention in the field. 
Obviously, a \\\"1% gain\\\" is quite different in iNaturalist and SSB-Hard, for example.\", \"(Minor) A large part of the paper is dedicated to reviewing previously known works, such as the overconfidence problem, which hinders readability.\", \"(Minor) Much of the paper, particularly the description of the proposed method, is redundant. A more direct tone is generally preferable in academic writing.\", \"Overall, while the paper is well-written and the proposed method is interesting and effective, the paper would benefit from a more refined articulation of its contributions and focus. Additionally, the reproducibility issue should be addressed.\"], \"questions\": [\"(Eq. (6)) What if a hyperparameter $\\\\lambda \\\\in \\\\mathbb{R}$ is introduced as $-\\\\sum_{k=1}^K b_k \\\\log b_k - \\\\lambda u \\\\log u$? This seems to be a straightforward generalization of Eq. (6).\", \"Is there any quantitative correspondence between $\\\\tau$ and $\\\\delta$ (Eq. (9) & (7))?\", \"How can we set $p$ and $\\\\tau$ (or $\\\\delta$) in practice?\", \"To my understanding, the regularizer (Eq. (9)) effectively prevents spurious training that would drive the second term in Eq. (6) to zero by enforcing $e \\\\rightarrow 0$. Is this correct?\", \"To me, the proposed method seems to be a simple combination of known techniques (to clarify, I do *not* claim that simplicity alone is grounds for rejection at all). Could you clarify the differences of the proposed technique from other Bayesian approaches, confidence calibration algorithms, semi-supervised learning, and entropy regularization techniques? 
A quantitative and objective discussion would be preferable, which would significantly enhance the paper's contribution and clarity.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal saue\", \"comment\": \"Thanks for your thoughtful and thorough comments on our paper and for recognizing the contribution of our reasonable idea and the lightweight, model-agnostic, and extremely easy-to-implement method.\\n\\n- W1. How to ensure $b_k$ is non-negative?\\n\\nThanks for your careful reading. In our implementation, to ensure the non-negativity of $b_k$, we calculate the Dirichlet parameters by $\\\\mathbf{\\\\alpha}= \\\\exp(ReLU(f(x)))$. Note that $ReLU(f(x))$ is non-negative, and thus we have $\\\\alpha\\\\geq 1$ and $b_k=\\\\alpha_k-1\\\\geq 0$. This simplifies the proof of Theorem 1. Alternatively, we can also use $b_k=\\\\exp(f(x))$ and $\\\\alpha=b_k+1$.\\n\\n- W2. Why does the algorithm keep $u$ close to $u_0$? What if the pretrained model is over-confident?\\n\\nVery insightful question! We agree that the uncertainty estimated by the pretrained model may not be ideal. However, please kindly note that in a fully TTA task, **we can only access unlabeled test data coming online and the inference efficiency matters**. Thus traditional methods devised for handling overconfidence like calibration, ensembling, and other Bayesian methods are not applicable. **The only practically available choice is to explore the uncertainty information contained within the model itself.** As shown in previous works [2] [3], while the softmax probabilities of the pretrained model tend to be overconfident, subjective logic is much more reliable, which can support the proposed regularization. We promise to make this point transparent in the manuscript as a future research direction.\\n\\n- W2. 
What about the alternative constraint $u \\geq u_0$?\\n\\nCertainly, the constraint $u \\geq u_0$ may result in a more conservative prediction. However, encouraging high uncertainty $u$ may result in underconfidence. That is, the model outputs all evidence equal to 0 since we have $u=K/\\sum \\exp f(x)$. During TTA, the model may encounter both unreliable samples and normal test samples that should be assigned high confidence. Unfortunately, due to the unsupervised nature of TTA, we lack guidance to increase or decrease the uncertainty. Thus we directly constrain $|u-u_0|\\leq \\delta$, adhering to Occam's Razor.\\n\\n- W3. Expected calibration error (ECE) is a better metric in this sense.\\n\\nFollowing your suggestion, we conduct additional experiments and report ECE. A brief summary on ImageNet-C Gaussian noise level 5 is shown as follows, and the full results will be updated in the revised paper.\\n\\n| | TENT | SAR | EATA | COTTA |\\n| ------------ | ----- | ----- | ----- | ----- |\\n| Without COME | 26.61 | 19.22 | 21.44 | 19.43 |\\n| With COME | 13.79 | 17.89 | 19.01 | 16.31 |\\n\\n- W4. Are there standard errors of the reported results?\\n\\nAccording to your advice, we ran the experiments multiple times and report the standard errors. The results have been added to Appendix C in the revised paper.\\n\\n- Q1. The tightness for the upper and lower bounds in Lemma 1 is determined by the choice of p. By considering the simple model where f(x) outputs the same logit for all classes, the ratio between the upper and lower bound is minimized by $p=\\infty$. Is it a better choice to consider $|f(x)|_{\\infty}=\\max |f_k(x)|$?\\n\\nWe agree that a larger p can lead to a stricter constraint on $|u-u_0|\\leq \\delta$. We conduct additional experiments on varying $p$. When using the infinity norm, a suboptimal classification accuracy is observed. We suppose this is because an overly strict constraint can be harmful to TTA. 
After all, on reliable test samples, we still expect to reduce the uncertainty (in a conservative manner).\\n\\n| p=1 | p=2 | p=3 | p=$\\infty$ |\\n| ---- | ---- | ---- | ---------- |\\n| 53.3 | 53.8 | 53.2 | 47.2 |\\n\\n- The usage of exp transformation of logits in the softmax function appears pivotal to the proof of Theorem 1. And if we take b(x) as a general non-negative function of f(x), the upper bound may reduce to 1. And minimizing the entropy of opinion in equation 6 can also lead to the Dirac distribution, which is over-confident. Is there a characterization for the class of functions that could be used to form a reasonable belief b(x)?\\n\\nThanks for your questions. Here we discuss a few alternative functions to form $b(x)$. Common non-negative activation functions include **ReLU, softplus and exp**. For the softplus function, we can also derive a similar result to Theorem 1. However, if we calculate $b$ by $b(x)=ReLU(f(x))$, we can only derive a non-trivial confidence upper bound (less than 1) when all the logits are non-negative. When the logits are all negative, $b(x)$ is an all-zero vector and $u$ is a constant independent of the model confidence. Besides, we conduct experiments and observe that using the exp transformation gains better performance than ReLU and softplus.\\n\\n| ReLU | SOFTPLUS | EXP |\\n| ----- | -------- | ----- |\\n| 40.77 | 37.08 | 53.92 |\\n\\n[1] Evidential Deep Learning to Quantify Classification Uncertainty, NIPS'18\\n\\n[2] Predictive uncertainty estimation via prior networks, NIPS'18\"}", "{\"comment\": \"Dear Reviewer 4L2X, since Dec 2nd is the final deadline for public discussion, I would like to know if you have any concerns that we could address to improve the score. I also sincerely wish you great success with your own research.\"}", "{\"comment\": \"I would thank the authors for their reply. 
However, I still have some major concerns: (1) The overall approach is an approximation of Equation (6) only with some theoretical insights (Lemma 1), and there is no derivation for this. Further, the model confidence upper bound in Theorem 1 is based on the assumption of the original (precise) objective (Equation (6)). I would suggest the authors having better theoretical analysis on their approach. (2) I'm still concerned about the datasets used in this work. Why should we add outliers to test the test-time adaptation methods? From my perspective, we should pay more attention to different kinds of distribution shifts (like WILDS datasets typically used).\\n\\nTherefore, I would like to maintain my score.\"}", "{\"comment\": \"I would thank the authors for their response. However, my major concern still exists. I checked the proof and I did not mean there's no proof for Lemma 1. My major point is that, **there is no proof to show that Lemma 1 guarantees the constraints in Equation (6)**. Furthermore, in Theorem 1, the constraints in Equation (6) is directly assumed. Since your actual algorithm is Equation (8), there is a **mismatch** between them, which cannot justify or provide insights on your actual algorithm.\\n\\nTherefore, I would like to maintain my score.\"}", "{\"comment\": \"Thanks for your timely response. We have added the additional results to Appendix C. The newly integrated results include:\\n\\n- Full results with standard deviation under standard TTA settings (Table 5 in Appendix C.1, page 17)\\n- Additional comparison under source-free domain adaptation settings (Table 7 in Appendix C.3, page 18)\\n- New baselines suggested by the reviewers (Table 8 in Appendix C.4, page 18)\\n- Additional results on varying hyperparameters (Tables 9 and 10 in Appendix C.5, page 18-19)\\n\\nIn the latest revision, we highlight the major revisions in blue.\\n\\nDue to the space limit, we extract some results from Table 5 with standard deviation here. 
The test data is ImageNet-C Gaussian noise level 5 (no adapt acc=35.1).\\n\\n| | Tent | SAR | EATA | COTTA |\\n| ---- | ------------ | ------------- | ------------- | ------------- |\\n| EM | $52.5\\pm0.1$ | $51.9\\pm 0.1$ | $56.0\\pm0.2$ | $40.3\\pm0.2$ |\\n| COME | $53.9\\pm0.0$ | $56.4\\pm0.1$ | $56.1\\pm 0.1$ | $43.5\\pm 0.0$ |\\n\\nThe source of the standard deviation consists of 1) the order in which the test mini-batches come online and 2) the randomness of the stochastic optimization algorithms, e.g., SGD, Adam. Since in the TTA setting the model is initialized from the publicly available pretrained model weights (i.e., vit-base-patch16-224 from Google and resnet50 from PyTorch), there is no randomness introduced by model initialization. All the experiments were conducted using random seeds 2024, 2025 and 2026. We have added this explanation to the Appendix (lines 843-847) to enhance reproducibility.\\n\\nThank you for considering our revisions and valuable suggestions. A newer manuscript is in preparation and we will keep actively working on this project. If you have any other concerns, please feel free to contact us and we look forward to discussing them with you.\"}", "{\"title\": \"Rebuttal 4L2X (2/2)\", \"comment\": \"- What is the exact benefit of the upper bound of model confidence? I think it will also hurt the performance on some \\\"confident\\\" samples.\\n\\nThese are very insightful comments. As shown in Figure 1, EM leads to over-confidence and model collapse. As TTA progresses, the model finally outputs nearly 100% confidence and collapses. In contrast, COME introduces a sample-wise upper bound on model confidence, which allows the model to avoid such a failure mode. **Theoretically**, we quantitatively calculate the upper bound. For simplicity, we assume a binary classification task and $\\\\delta=0.1$. Typically we have $u_0\\\\approx 0.2$ and $u_0\\\\approx 0.6$ for normal test samples and anomaly outliers respectively. 
At this time, the upper bound of model confidence for normal test samples is 0.99995, and for anomaly outliers it is 0.84113. We can observe that the upper bound for the normal test sample is still rather high. For this reason, we can suspect that such an upper bound would not hurt the performance on confident samples. **Empirically**, we recorded the **maximum model confidence** on test samples from ImageNet-C (1000 classes). When the prediction is correct, the confidence can still be very high (0.91), hence it does not harm performance. Besides, in contrast to entropy minimization, our COME establishes a **distinguishable** confidence margin between correct and wrong predictions. We will add this analysis in the revision to enhance the clarity of Theorem 1.\\n\\n| | Right | Wrong |\\n| ---- | ------ | ------ |\\n| EM | 0.9997 | 0.9978 |\\n| COME | 0.9187 | 0.8186 |\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe have uploaded the latest revision to improve the presentation quality. We understand the workload that reviewers face, and we appreciate the efforts you have already put into evaluating our work. If there are any additional insights, questions, or clarifications on our responses and manuscript that you would like to discuss with us, we would be very grateful to hear them; your feedback is valuable for the improvement of our research.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for the reply. The authors addressed most of my concerns, and I changed my score from 3 to 5.\\n\\n> A newer manuscript is in preparation\\n\\nHave the authors revised the paper to resolve my remaining concern\\u2014namely, the presentation issue\\u2014which was also highlighted by other reviewers (and may impede an accurate evaluation of the paper)?\"}", "{\"title\": \"Rebuttal 5e1E (2/2)\", \"comment\": \"- Is there any quantitative correspondence between $\\\\tau$ and $\\\\delta$ ?\\n\\nThese two parameters play independent roles in constraining the uncertainty mass $u$. 
$\\delta$ ensures that the uncertainty does not diverge too far from the pretrained model and thus avoids overly extreme outputs. Note that the uncertainty mass $u$ is negatively related to $\\tau$ since we have $u=K/\\sum \\exp(f(x))$. That is, $\\tau$ can increase or decrease the magnitude of $u$ before minimizing the entropy of opinion in Eq. 6. For example, when $\\tau$ is 1, $u$ averages to 0.3452 on ImageNet-C Gaussian noise level 5. If we instead set $\\tau=0.5$, $u$ averages to 0.4556, and at this time, the model will be more likely to reject adapting most input samples.\\n\\n- How can we set $\\tau$ and $\\delta$ in practice?\\n\\nBy introducing Lemma 1, we replace the hyperparameter $\\delta$ with $p$. In our experiments, we set $\\tau=1$ and $p=2$ for simplicity and in accordance with Occam's Razor. For $\\tau$, as we mentioned before, it actually controls the magnitude of the uncertainty mass. If we have some prior knowledge that most test samples should be rejected for adaptation during TTA, we should choose a relatively small $\\tau$, making the model confidence more conservative in this circumstance. As for $\\delta$ (or $p$), which represents the tolerance of uncertainty divergence, it should be selected by the user as needed via trial and error: if users are extremely cautious about unreliable TTA, $\\delta$ should be tuned down and $p$ should be tuned up; otherwise, they can be tuned in the opposite direction if better performance is required.\\n\\n- To my understanding, the regularizer (Eq. (9)) effectively prevents spurious training that would drive the second term in Eq. (6) to zero by enforcing $e \\rightarrow 0$. Is this correct?\\n\\nYes. 
For unreliable samples, for which the model outputs a relatively large $u$, the model will tend to increase $u$ during entropy minimization and thus reject training on such unreliable samples.\\n\\n- Could you clarify the differences of the proposed technique from other Bayesian approaches, confidence calibration algorithms, semi-supervised learning, and entropy regularization techniques? A quantitative and objective discussion would be preferable, which would significantly enhance the paper's contribution and clarity.\\n\\nThanks for this valuable suggestion. (1) Compared to other Bayesian TTA methods such as Bayesian Neural Networks [2] or ensembling [3], which involve multiple models or inferences, the proposed COME does not incur additional inference cost or require modifying the model architecture, which meets the demands of TTA. (2) Calibration methods typically rely on a separate validation set for tuning the temperature. However, during TTA, we can only access (batches of) unlabeled test samples coming online, thus calibration is not applicable. (3) Compared to semi-supervised learning techniques like entropy regularization, our COME is a refinement of classic entropy minimization that is specified for online TTA settings. 
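For readers unfamiliar with the baseline being refined throughout this thread: the classic entropy-minimization (EM/Tent-style) objective is simply the Shannon entropy of the softmax prediction. The following plain-Python sketch is illustrative, not the authors' released code:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_minimization_loss(logits):
    # Shannon entropy of the prediction; minimizing it sharpens the softmax,
    # which is what drives the overconfidence discussed in this thread.
    p = softmax(logits)
    return -sum(pi * math.log(pi + 1e-12) for pi in p)

# A sharper (more confident) prediction has lower entropy, so gradient
# descent on this loss pushes confidence toward 1 on every sample.
```

COME instead minimizes the entropy of a subjective-logic opinion, which keeps an explicit uncertainty mass rather than forcing all probability mass onto the known classes.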
We provide a comprehensive comparison of these methods from the aspects of **application scenarios and empirical performance**.\\n\\nComparison of the application scenarios of calibration, BNN- or ensemble-based TTA, source-free domain adaptation and COME regarding the dependency on labeled data, applicability for online TTA and additional inference latency.\\n\\n| Methods | labeled data | online | Inference latency |\\n| -------------- | ------------ | ------- | ----------------- |\\n| Calibration | Yes | No | No |\\n| BNNs | Yes | No | Yes |\\n| Ensemble | No | Yes | Yes |\\n| Source-free DA | No | No | No |\\n| **COME** | **No** | **Yes** | **No** |\\n\\nClassification performance comparison with another Bayesian-inspired TTA method, i.e., BACS, in the **TTA setting**. The backbone is resnet50v2. Test data is ImageNet-C. The results of BACS [3] are copied from the original paper since it does not release the source code. Please kindly note that BACS typically uses an ensemble of 10 models and is thus less efficient than our model-agnostic method.\\n\\n| No adapt | BN | BACS | TENT | COME+Tent | COME+EATA |\\n| -------- | ---- | ---- | ---- | --------- | --------- |\\n| 47.3 | 47.6 | 56.1 | 48.9 | 51.1 | **58.2** |\\n\\nClassification accuracy comparison with other learning objectives originating from semi-supervised learning and unsupervised learning in the **TTA setting**. We implement all methods based on Tent and test on ImageNet-C Gaussian noise level 5.\\n\\n| No adapt | PL | EM | IM | COME |\\n| -------- | ----- | ----- | ----- | ----- |\\n| 35.12 | 49.85 | 52.39 | 50.98 | 53.77 |\\n\\n[1] Do We Really Need to Access the Source Data? 
Source Hypothesis Transfer for Unsupervised Domain Adaptation, ICML'20\\n\\n[2] Extrapolative Continuous-time Bayesian Neural Network for Fast Training-free Test-time Adaptation, NeurIPS'22\\n\\n[3] Bayesian Adaptation for Covariate Shift, NeurIPS'21\"}", "{\"title\": \"Rebuttal 4L2X (1/2)\", \"comment\": \"Thank you for your valuable comments and for recognizing our well-motivated paper and extensive experiments. Your comments are very helpful for us to improve the quality of our manuscript.\\n\\n- W1. Why Dirichlet distribution is used?\\n\\nIn this work, we use the Dirichlet distribution since it serves as the prior distribution of the categorical distribution in the Bayesian framework. Thus it is the most natural choice for modeling the uncertainty of the predicted categoricals. There also exist several alternative distributions, such as mixtures of Dirichlets or the Logistic-Normal distribution. **We make this choice due to the tractable analytic properties of the Dirichlet.** Besides, as we mentioned in lines 246-256, **compared to other Bayesian uncertainty modeling methods, using Dirichlet and SL for uncertainty modeling is model-agnostic and light-weight**, which meets the efficiency requirements of the TTA task.\\n\\n- W2. What is the role of $\\delta$ ?\\n\\nThanks for your comments regarding $\\delta$. As we mentioned in lines 281-285, **the role of $\\delta$ is to constrain the uncertainty mass so that it does not diverge too far from the pretrained model during the unsupervised TTA process, which can prevent overly extreme model uncertainty**. The magnitude of $\\delta$ represents our tolerance for uncertainty divergence. In lines 292-310, we propose to avoid tuning $\\delta$ as a hyperparameter by Lemma 1. This significantly simplifies the problem, as recognized by Reviewer 5e1E.\\n\\n- Could the authors provide a more detailed (theoretical) comparison between the proposed method and traditional EM? What is the benefit?\\n\\nThe proposed COME enjoys several advantages over EM. 
Please kindly note that, as mentioned in lines 265-269, compared to EM, COME allows the model to express high overall uncertainty and reject adapting the test sample when the total evidence is insufficient. Besides, in lines 330-341, we provide a rigorous theoretical analysis of the model confidence. Our COME upper-bounds the model confidence during adaptation based on the sample-specific confidence of the initial model. In contrast, the confidence of EM increases rapidly, leading to model collapse as shown in Figure 1.\\n\\n- Some baselines are missing [1] [2].\\n\\nThanks for your actionable suggestion. According to your advice, we conduct additional experiments involving the mentioned two baselines. Our COME outperforms them. Please kindly note that **1) [2] has not made its source code publicly available and 2) most of our experiments are conducted on standard ViT, thus [1] is not applicable since there are no batch normalization layers**. For a fair comparison, we ran the proposed COME using the same backbone (resnet50v2) and dataset (ImageNet-C) as in [2] and report the average accuracy. The results of [2] are copied from the original paper.\\n\\n| No adapt | BN | BACS | TENT | COME+Tent | COME+EATA |\\n| -------- | ---- | ---- | ---- | --------- | --------- |\\n| 47.3 | 47.6 | 56.1 | 48.9 | 51.1 | **58.2** |\\n\\nIt is worth noting that [2] uses an ensemble of **10 models** for TTA, which introduces noticeable inference latency. In contrast, our COME is designed for **single**-model TTA, which is much more efficient.\\n\\n- I'm curious why the authors follow literatures on outliers detection.\\n\\nIn this paper, besides standard TTA settings, we also consider open-world TTA tasks where outliers exist in the test data. This setting is realistic and also considered in recent TTA works [3] [4] [5] as well as OOD generalization works [6] [7] [8]. 
We will clarify this in the revision to address your comments and avoid confusion.\\n\\n[1] Evaluating prediction-time batch normalization for robustness under covariate shift. arXiv preprint arXiv:2006.10963. \\n\\n[2] Bayesian adaptation for covariate shift. NIPS'21\\n\\n[3] Towards open-set test-time adaptation utilizing the wisdom of crowds in entropy minimization. CVPR'23\\n\\n[4] Unified Entropy Optimization for Open-Set Test-Time Adaptation. CVPR'24\\n\\n[5] ATTA: anomaly-aware test-time adaptation for out-of-distribution detection in segmentation. NIPS'23\\n\\n[6] Feed two birds with one scone: Exploiting wild data for both out-of-distribution generalization and detection. ICML'23\\n\\n[7] Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization. ICML'24\\n\\n[8] Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains. CVPR'24\"}", "{\"title\": \"Thanks for your support!\", \"comment\": \"Thank you for your support and insightful suggestions. I am happy to hear that all your concerns have been addressed. I will definitely explore foundation models in future work. To me, generalization is one of the most fundamental and attractive challenges in machine learning. As a newcomer to this field, I feel very lucky and motivated in the discussion with you. Thanks again for your feedback and best regards.\"}", "{\"comment\": \"Thank you for sharing the revised paper. I have read it and confirmed the readability has been much improved. I changed my score from 5 to 6 because all the major and minor concerns raised in my first review have been resolved.\\n\\nI would like to share an additional comment below. Incorporating it into the paper would be impossible due to the time limitation, but I hope it helps.\\n\\nThe current pretrained source models are ImageNet-pretrained ViT and ResNet. 
In view of recent trends in domain adaptation and related areas, I would like to recommend using rich foundation models, such as DINOv2, which are expected to extract features that are much more robust against domain shifts than ImageNet-pretrained models. In real-world scenarios, using foundation models as the base architecture is more practical, to my understanding, because they potentially fill domain gaps without using any domain adaptation techniques. While foundation models are sometimes criticized for their large number of parameters, smaller, distilled models are sometimes available, as seen with DINOv2. In fact, a small version (ViT-S/14 distilled in https://github.com/facebookresearch/dinov2) is as small as ResNet-50, addressing the latency problem. ImageNet-pretrained ViT and ResNet have been used as base architectures in this field, but they will become obsolete in the near future, in my humble opinion.\\n\\nI wish you the best of luck with your work.\"}" ] }
4zygH3k8Zr
Replacement Learning: Training Vision Tasks with Fewer Learnable Parameters
[ "Yuming Zhang", "Peizhe Wang", "Shouxin Zhang", "Dongzhi Guan", "Jiabin Liu", "Junhao Su" ]
Traditional end-to-end deep learning models often enhance feature representation and overall performance by increasing the depth and complexity of the network during training. However, this approach inevitably introduces issues of parameter redundancy and resource inefficiency, especially in deeper networks. While existing works attempt to skip certain redundant layers to alleviate these problems, challenges related to poor performance, computational complexity, and inefficient memory usage remain. To address these issues, we propose an innovative training approach called Replacement Learning, which mitigates these limitations by completely replacing all the parameters of the frozen layers with only two learnable parameters. Specifically, Replacement Learning selectively freezes the parameters of certain layers, and the frozen layers utilize parameters from adjacent layers, updating them through a parameter integration mechanism controlled by two learnable parameters. This method leverages information from surrounding structures, reduces computation, conserves GPU memory, and maintains a balance between historical context and new inputs, ultimately enhancing overall model performance. We conducted experiments across four benchmark datasets, including CIFAR-10, STL-10, SVHN, and ImageNet, utilizing various architectures such as CNNs and ViTs to validate the effectiveness of Replacement Learning. Experimental results demonstrate that our approach reduces the number of parameters, training time, and memory consumption while completely surpassing the performance of end-to-end training.
[ "Machine Learning", "Deep Learning", "Foundation Models" ]
https://openreview.net/pdf?id=4zygH3k8Zr
https://openreview.net/forum?id=4zygH3k8Zr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xrqVFbMJGb", "vvqG5Sghtl", "otfpHDGoNn", "dmQLixBJ3b", "RoBbjcYigt", "74lflSWwZA" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730577416936, 1730289887488, 1729713037709, 1731724742144, 1730795007783, 1730680706300 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2302/Reviewer_aFiP" ], [ "ICLR.cc/2025/Conference/Submission2302/Reviewer_8J38" ], [ "ICLR.cc/2025/Conference/Submission2302/Reviewer_S4Bg" ], [ "ICLR.cc/2025/Conference/Submission2302/Authors" ], [ "ICLR.cc/2025/Conference/Submission2302/Reviewer_7Uue" ], [ "ICLR.cc/2025/Conference/Submission2302/Reviewer_tFZt" ] ], "structured_content_str": [ "{\"summary\": \"In this work, the authors aim to improve network efficiency and reduce parameter redundancy by introducing Replacement Learning, a straightforward approach that fixes the parameters in a layer by interpolating between two adjacent layers. The experimental results indicate that this method is effective\\u2014although I find these results somewhat difficult to believe.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"I\\u2019d say the idea is somewhat interesting, and the paper follows the ICLR submission style.\", \"weaknesses\": \"1. Figure 1 doesn\\u2019t make any sense\\u2014using very large models on very small datasets. It would be more meaningful to test on a larger dataset like ImageNet, with different models of different scales for comparison.\\n\\n2. what is the motivation of such a method?\\n\\n3. There are several unsupported and arbitrary statements in the paper. For example, in Line 83, \\\"Considering that parameters from adjacent layers, if solely derived from either shallow or deep layers, often fail to simultaneously enable frozen layers to excel in learning both local and global features\\\" lacks justification. 
The relationship to local or global features is unclear here, as adjacent layers don\u2019t inherently correspond to shallow or deep feature characteristics.\\n\\n4. In Eq. 9, the parameters of the i-th layer are a linear combination of those from the previous and next layers. What\u2019s the motivation behind this? It\u2019s also unclear why this setup would yield better performance, as shown in Table 1 and Table 2.\\n\\n5. What is the motivation for providing detailed information about backpropagation? I don\u2019t see any differences or novel contributions in this section.\\n\\n6. In the experiments, I don\u2019t understand how reducing the number of parameters in a model can consistently improve performance across different datasets and models. This seems completely unreasonable and doesn\u2019t make sense at all, especially since I didn't see anything that could explain this.\\n\\n7. BTW, the results on CIFAR-10, SVHN, and STL-10 aren\u2019t reliable, as these datasets are too small to provide meaningful insights.\\n\\n8. Why is k=4 chosen? There should be an ablation study to justify this choice. Additionally, why is there a frozen layer every k layers throughout the network? I assume we can select layers.\\n\\n9. What happens if the frozen layer is the last layer in a ResNet stage? How would parameter interpolation be handled in this case?\\n\\n10. The experimental settings MUST have issues. All the results for the vanilla models are SIGNIFICANTLY lower than in the original papers. This calls into question the reliability of the results presented in the paper, potentially not only for ImageNet.\\n\\n11. Typo: feature maps can be found in Figure 4.3.1. \\n\\n12. I cannot tell any essential differences among the four images in Fig. 
3.\", \"questions\": \"I must express my disappointment after spending several hours reviewing such a submission.\\n\\nI suggest that the authors reconsider the motivation behind the proposed method and conduct all experiments with greater seriousness and care, as many results currently seem unreasonable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method of replacement Learning, which reduces computational overhead and resource consumption in deep learning. It enhances model performance while surpassing end-to-end training in efficiency and memory usage. The method has been validated on various datasets and architectures, showing its versatility and potential for broader application.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The writing of the article is quite good and the article is logical.\\n2. The experiments on the classification task seem to be adequate.\\n3. The charts and graphs are more beautiful and properly presented.\\n4. Supplementary materials are substantial.\", \"weaknesses\": \"1. Title: The authors use \\\"Visual Tasks\\\" in the title. \\\"Visual Tasks\\\" include multiple tasks such as classification, detection, segmentation, etc., but it seems that the paper is only validated on the classification task. I suggest adding other tasks to the paper, as has been done in several recent PEFT works [1-3].\\n[1] 1% vs 100%: Parameter-efficient low rank adapter for dense predictions.\\n[2] Pro-tuning: Unified prompt tuning for vision tasks.\\n[3] Adapter is all you need for tuning visual tasks.\\n[4] Parameter-efficient is not sufficient: Exploring parameter, memory, and time efficient adapter tuning for dense predictions.\\n\\n2. Related work: the authors perhaps left out some of the most recent work of parameter-efficient fine tuning (PEFT). \\n3. 
Experiments: (1) Experiments are performed only on classification tasks; (2) More parameter-efficient fine-tuning methods are available for comparison.\\n\\nI would consider increasing the score if the authors could provide more convincing comparative experiments.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduced a neural network learning mechanism - Replacement Learning - that replaces the parameters of selected layers with just 2 parameters, a and b. The output of that layer is computed by a linear model: a * activations of the previous layer + b * activations of the next layer. Given that similar layers in a neural network produce correlated outputs, this linear combination approximates the replaced layer's outputs. The method reduces the parameter count, improves throughput, and increases accuracy on image classification datasets (CIFAR-10, STL-10, SVHN, and ImageNet-1k).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Replacement Learning introduces a novel approach for training more efficient models with a smaller number of parameters. The parameter integration has good future potential, especially when applied to structured pruning and upon supporting transfer learning.\\n\\n2. The experiments show good performance on image classification tasks for both convolutional and transformer networks in terms of accuracy, memory, and time per epoch.\\n3. The paper presents ablations to determine the robustness of the method and also presents a complexity analysis of the method.\\n4. The paper addresses most experimental parameters in the appendix, exhibiting good reproducibility.\", \"weaknesses\": \"1. The primary concern with the paper is that the experiments are conducted by training networks from scratch. 
Transfer learning boosts the performance of classification tasks; for instance, in the official ViT paper [1], ViT-B/16 reached an accuracy of 98.13% upon transfer learning on CIFAR-10, while the paper achieves only 72.86%. Therefore, it is important to perform an experiment examining the effect of Replacement Learning under transfer learning. One possible reason why the paper omits trying transfer learning can be the initialization of the replaced parameters. Parameter Integration $a * \\theta_{i-1}+ b* \\theta_{i+1}$ is a linear model, which allows approximating the a, b values for pre-trained layers in a few steps.\\n\\n2. The paper only shows results on classification; the effect of Replacement Learning on downstream tasks such as object detection and segmentation would strengthen the approach.\\n\\n3. The flow of gradients is not in a singular direction during Replacement Learning, as suggested by Eq. 15, where $\\delta_{i}$ is used for the gradient computation of $\\delta_{i+1}$. This could cause issues such as vanishing or exploding gradients and should be examined. \\n\\n4. While the experiments show better time per epoch, the effect of Replacement Learning on convergence must be studied. This is important as the time for 1 epoch can be misleading when the number of epochs to converge is significantly higher than with backpropagation.\\n\\n5. While the paper addresses that only MSA layers were chosen for replacement, were $\\theta_{i-1}$ and $\\theta_{i+1}$ also MSA layers? This is important as [2] does not guarantee correlation between MSA and MLP layers.\", \"nitpick\": \"some figure references are wrong: figures in the ablation display figure numbers 4.3.1, 4.3.2 and 4.3.3, which need to be corrected to Figures 3, 4, and 5. Usually it is the placement of the caption and label in \\begin{figure} that causes this issue.\", \"references\": \"[1] Dosovitskiy, Alexey. 
\\\"An image is worth 16x16 words: Transformers for image recognition at scale.\\\" arXiv preprint arXiv:2010.11929 (2020).\\n[2] Venkataramanan, Shashanka, et al. \\\"Skip-attention: Improving vision transformers by paying less attention.\\\" arXiv preprint arXiv:2301.02240 (2023).\", \"questions\": \"I would improve my rating if experiments are performed for points 1 and 4 in weakness and clearing my questions in points, 3 and 5.\", \"i_also_have_a_minor_question\": \"While it is clear both $\\\\theta_{i-1}$ and $\\\\theta_{i+1}$ are useful for approximating $\\\\theta_i$ why not expand it to $\\\\theta_{i-n}$ and $\\\\theta_{i+n}$, this would increase the global context of the computations with minimal increase in parameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces Replacement Learning, a novel training technique aimed at reducing the number of learnable parameters in deep learning models while maintaining or even enhancing model performance. The approach specifically targets the limitations of traditional end-to-end training, such as high computational demand, memory usage, and parameter redundancy, which are common in deeper architectures. Rather than updating all parameters during backpropagation, Replacement Learning freezes certain layers and uses parameters from adjacent layers, controlled by two learnable parameters, to inform the frozen layers through a parameter integration mechanism. 
This design enables the frozen layers to leverage both local and global feature representations, balancing historical context with new inputs while reducing memory and computational costs.\\n\\nThe authors conduct experiments across multiple image classification datasets (CIFAR-10, STL-10, SVHN, and ImageNet) using various architectures, including CNNs and Vision Transformers (ViTs). Results demonstrate that Replacement Learning reduces GPU memory usage, training time, and the number of parameters, while achieving higher accuracy than traditional end-to-end training. Furthermore, the method shows versatility, adapting effectively across different architectures and datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. A novel training strategy that replaces frozen layer parameters with a fusion of parameters from neighboring layers, controlled by two learnable parameters, reducing the training load without compromising performance.\\n2. Replacement Learning shows substantial savings in memory usage and training time and achieves better accuracy than end-to-end training.\\n3. The method performs well across diverse architectures (CNNs and ViTs) and datasets, suggesting broad applicability.\\n4. Extensive experiments on benchmark datasets confirm the effectiveness of Replacement Learning in surpassing the performance of standard training approaches.\", \"weaknesses\": \"I like the paper overall. However, would like to point some weaknesses which the authors have also mentioned in their limitations section:\\n1. While effective on image-based tasks, the approach has not yet been tested on other domains such as NLP or multimodal tasks, which limits its generalizability.\\n2. 
The paper could benefit from a more in-depth discussion of any limitations associated with freezing certain layers and its impact on long-term learning dependencies, especially in deeper networks.\", \"questions\": \"I like the paper overall and don't have any major questions to ask.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an efficient training method for deep neural networks, named Replacement Learning, which aims to reduce the number of trainable parameters, training time, and memory consumption. Replacement Learning achieves this by selectively freezing the parameters of certain layers, which then utilize parameters from adjacent layers updated through a parameter integration mechanism controlled by just two learnable parameters. This method leverages information from surrounding structures to enhance overall model performance while reducing computation and conserving GPU memory.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly written and is generally easy to follow.\\n\\n2. The problem being studied in the paper is becoming increasingly important recently.\\n\\n3. The idea is simple yet seems to be something that people haven\\u2019t tried before.\", \"weaknesses\": \"1. The proposed method only marginally reduces the GPU memory consumption and training time compared to the baseline training method.\\n\\n2. The paper did not compare the proposed method with any other parameter-efficient training methods, such as [3].\\n\\n3. Although the paper discusses related works on alternative backpropagation methods and training utilizing surrounding layers, none of the related works are compared with the proposed methods in the experiments.\\n\\n4. 
Parameter-efficient training methods [1, 2] are widely applied in fine-tuning pre-trained networks by selectively updating a small subset of model parameters, streamlining the adaptation process of pre-trained models, and facilitating rapid deployment across various domains. However, this paper only studies the training-from-scratch setting.\\n\\n[1] Zhang, Taolin, et al. \\\"Parameter-efficient and memory-efficient tuning for vision transformer: a disentangled approach.\\\" ECCV 2024.\\n\\n[2] He, Xuehai, et al. \\\"Parameter-efficient model adaptation for vision transformers.\\\" AAAI 2023.\\n\\n[3] Mostafa, Hesham, and Xin Wang. \\\"Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization.\\\" ICML 2019.\", \"questions\": \"1. How does the value of $k$ impact the performance of the models? The authors should perform an ablation study on this value.\\n\\n2. How does the proposed method compare with other parameter-efficient training methods?\\n\\n3. How does the proposed method compare with other alternative backpropagation methods and methods utilizing surrounding layers during training?\\n\\n4. Can this method be applied to fine-tuning pre-trained networks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4zQ5eIPtMp
Self-Exploring Language Models: Active Preference Elicitation for Online Alignment
[ "Shenao Zhang", "Donghan Yu", "Hiteshi Sharma", "Han Zhong", "Zhihan Liu", "Ziyi Yang", "Shuohang Wang", "Hany Hassan Awadalla", "Zhaoran Wang" ]
Preference optimization, particularly through Reinforcement Learning from Human Feedback (RLHF), has achieved significant success in aligning Large Language Models (LLMs) to adhere to human intentions. Unlike offline alignment with a fixed dataset, online feedback collection from humans or AI on model generations typically leads to more capable reward models and better-aligned LLMs through an iterative process. However, achieving a globally accurate reward model requires systematic exploration to generate diverse responses that span the vast space of natural language. Random sampling from standard reward-maximizing LLMs alone is insufficient to fulfill this requirement. To address this issue, we propose a bilevel objective optimistically biased towards potentially high-reward responses to actively explore out-of-distribution regions. By solving the inner-level problem with the reparameterized reward function, the resulting algorithm, named Self-Exploring Language Models (SELM), eliminates the need for a separate RM and iteratively updates the LLM with a straightforward objective. Compared to Direct Preference Optimization (DPO), the SELM objective reduces indiscriminate favor of unseen extrapolations and enhances exploration efficiency. Our experimental results demonstrate that when fine-tuned on Zephyr-7B-SFT and Llama-3-8B-Instruct models, SELM significantly boosts the performance on instruction-following benchmarks such as MT-Bench and AlpacaEval 2.0, as well as various standard academic benchmarks in different settings.
[ "Online Alignment", "Large Language Model" ]
https://openreview.net/pdf?id=4zQ5eIPtMp
https://openreview.net/forum?id=4zQ5eIPtMp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s7owjIl3uM" ], "note_type": [ "comment" ], "note_created": [ 1729529231061 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8758/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
4z3IguA4Zg
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
[ "Chenxi Wang", "Xiang Chen", "Ningyu Zhang", "Bozhong Tian", "Haoming Xu", "Shumin Deng", "Huajun Chen" ]
Multimodal Large Language Models (MLLMs) frequently exhibit hallucination phenomena, but the underlying reasons remain poorly understood. In this paper, we present an empirical analysis and find that, although MLLMs incorrectly generate the objects in the final output, they are actually able to recognize visual objects in the preceding layers. We speculate that this may be due to the strong knowledge priors of the language model suppressing the visual information, leading to hallucinations. Motivated by this, we propose a novel dynamic correction decoding method for MLLMs DeCo, which adaptively selects the appropriate preceding layers and proportionally integrates knowledge into the final layer to adjust the output logits. Note that DeCo is model agnostic and can be seamlessly incorporated with various classic decoding strategies and applied to different MLLMs. We evaluate DeCo on widely-used benchmarks, demonstrating that it can reduce hallucination rates by a large margin compared to baselines, highlighting its potential to mitigate hallucinations. Code is available at https://github.com/zjunlp/DeCo.
[ "Hallucination Mitigation", "Multimodal Large Language Models", "Decoding Strategy" ]
Accept (Poster)
https://openreview.net/pdf?id=4z3IguA4Zg
https://openreview.net/forum?id=4z3IguA4Zg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jD0HK46qz6", "gdi2i5S8e9", "c7mGjg71uv", "VjHJaFt7cN", "REb8ztCB1E", "3VGFPTf3Dk" ], "note_type": [ "official_review", "official_review", "meta_review", "decision", "official_review", "official_review" ], "note_created": [ 1730439752656, 1729221094382, 1734673061338, 1737523434275, 1730620510265, 1730616581155 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1076/Reviewer_jCUa" ], [ "ICLR.cc/2025/Conference/Submission1076/Reviewer_QBQj" ], [ "ICLR.cc/2025/Conference/Submission1076/Area_Chair_98HL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1076/Reviewer_vNi9" ], [ "ICLR.cc/2025/Conference/Submission1076/Reviewer_tgHP" ] ], "structured_content_str": [ "{\"summary\": \"This paper makes the observation that MLLMs' internal prior suppresses the visual information, thus leading to hallucination. Besides, they empirically observe that intermediate layers may have less such suppression. Motivated by this observation, this work proposes to combine intermediate logits with final layer projection, and demonstrate improvement in reducing hallucination via empirical study.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work makes an interesting observation of how visual information exists in intermediate layers, and then overridden by knowledge prior closer to the output\", \"The proposed mitigation method is lightweight and efficient.\", \"The experimental results are in general better than baselines.\"], \"weaknesses\": \"Although the presentation has a focus about image-conditioned generative language model, the methodology for finding 1 and 2, as well as the proposed layer selection and probability correction, are modality agnostic. 
The findings are mostly empirical, and it's unclear whether this is a general phenomenon for other models of the same size, or for models of other sizes.\\n\\nThere is quite a bit of literature studying LLMs' internal representations and hallucination; we selectively list a few as [1-5]. What the multi-modal setting brings is the strong conditional dependency, while for text-only use cases there might or might not be informative conditions. An analytical comparison of how an MLLM focuses on or ignores input conditions would be more informative and persuasive in supporting the methodology. \\n\\nIn L191-L201 the paper compares the token output with and without the image condition. However, this has been studied thoroughly in [6], which also proposes a hallucination detection and mitigation method.\\n\\nThe method design also seems ad hoc: there are thresholds in Eq. 2 and Eq. 3, the layer interval a, b in Eq. 4, and the weight $\\alpha$ in Eq. 7. Together they amplify the concern about the generalizability of the proposed method.\\n\\nI suggest connecting the empirical evidence in this paper to 1/ evidence from other papers in the same spirit, and 2/ the unique properties and behavior of conditional multi-modal modeling. \\n\\n**Reference**\\n\\n[1] Liu, Junteng, et al. \\\"On the Universal Truthfulness Hyperplane Inside LLMs.\\\" arXiv preprint arXiv:2407.08582 (2024).\\n\\n[2] Li, Kenneth, et al. \\\"Inference-time intervention: Eliciting truthful answers from a language model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[3] Zhang, Tianhang, et al. \\\"Enhancing uncertainty-based hallucination detection with stronger focus.\\\" arXiv preprint arXiv:2311.13230 (2023).\\n\\n[4] Azaria, Amos, and Tom Mitchell. \\\"The internal state of an LLM knows when it's lying.\\\" arXiv preprint arXiv:2304.13734 (2023).\\n\\n[5] Duan, Hanyu, Yi Yang, and Kar Yan Tam. \\\"Do LLMs Know about Hallucination? 
An Empirical Investigation of LLM's Hidden States.\\\" arXiv preprint arXiv:2402.09733 (2024).\\n\\n[6] Favero, Alessandro, et al. \\\"Multi-modal hallucination control by visual information grounding.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": [\"It's unclear to me how the affine layer $\\phi$ in L104 is initialized and trained, if at all. If it needs training, then it seems each layer needs such a layer. If it doesn't, then how do we make sure that representations across layers can share the same mapping to token probabilities?\", \"POPE evaluates answers in Yes/No. How could the decoding strategy impact the performance on this benchmark?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates why MLLMs generate hallucinations, particularly in image captioning tasks, and introduces DeCo, an innovative method that leverages knowledge from earlier layers to reduce hallucinations during inference. 
However, I still have some concerns about this article, specifically in regard to the weaknesses.\\nIf these concerns are addressed, I will consider raising my score.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a detailed examination of why MLLMs generate non-existent objects, offering valuable insights into the hallucination issue in image captioning tasks.\", \"The introduction of DeCo is innovative, using preceding-layer knowledge to reduce hallucinations during inference, effectively improving output accuracy.\", \"The method of probing across transformer layers reveals how hallucinations emerge in later layers, helping to understand MLLM behavior better.\"], \"weaknesses\": [\"I am confused by the experimental results about POPE in Table 3, as they do not seem to fully align with the results reported for LLaVA-1.5.\", \"The authors did not perform more extensive evaluations on comprehensive benchmarks such as MMBench and MM-Vet, which are crucial for assessing the model's overall performance.\"], \"questions\": [\"Could you clarify whether Finding 1 in Section 2.1 is related to the methodology of the paper? It seems that embedding-level knowledge wasn't used to assist the model.\", \"Could you follow LLaVA\u2019s setting and conduct more extensive evaluations on comprehensive benchmarks like MMBench and MM-Vet, given their importance for assessing the model's overall performance?\", \"Could the authors clarify the specific settings followed in the experiments presented in Table 3? 
How do these settings differ from those used in LLaVA?\", \"Is this decoding method useful in more advanced VLLMs, such as Qwen-VL, VILA, etc.?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This works shows that though MLLMs could generate incorrect outputs in the final layer, they could effectively recognise objects in preceding layers. Thus, this work introduces a dynamic correction decoding approach for MLLMs to adaptively choose relevant layers and integrate their knowledge into the final layer to adjust outputs. This insight provides a new layer-wise view on the hallucination problem in MLLMs. The proposed method is lightweight and efficient, and the results show the efficacy of such a method. All the reviewers recommend acceptance. In the camera ready version, authors need to carefully improve the paper following reviewers suggestions, including adding in-depth analysis of the model and statistical analysis.\", \"additional_comments_on_reviewer_discussion\": \"Authors replied reviewers' questions about adding implementation details, adding additional metrics, adding additional benchmarks etc. After rebuttal, all reviewers are happy with the work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper introduces DeCo (Dynamic Correction Decoding), a decoding technique to mitigate hallucinations in Multimodal Large Language Models (MLLMs). The authors identify that MLLMs are capable of recognizing objects in earlier layers, but this recognition is suppressed in deeper layers by strong language model priors, which leads to hallucinations. DeCo dynamically integrates the output from preceding layers, which contain higher probabilities for ground-truth tokens, into the final layer logits to enhance visual grounding and suppress hallucinations. 
Experimental results on benchmarks such as CHAIR, POPE, and MME, as well as GPT-4o-assisted evaluation, demonstrate DeCo\u2019s significant improvements over baselines in hallucination suppression across multiple MLLMs, with manageable latency increases, highlighting its practical applicability.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The authors demonstrate through probing experiments that MLLMs can recognize objects in earlier layers but tend to \u201cforget\u201d this information due to language model priors in deeper layers, leading to hallucinations. This insight offers a novel layer-wise perspective on the hallucination mechanism in MLLMs.\", \"The figures illustrating token probabilities across transformer layers effectively highlight the trends for hallucinated versus non-hallucinated tokens, making the analysis accessible and informative.\", \"Compared to existing methods like VCD and OPERA, DeCo achieves similar or better hallucination suppression with lower latency overhead, enhancing its practicality for real-world applications.\", \"Evaluation across diverse benchmarks (CHAIR, POPE, and MME) and several models (InstructBLIP, MiniGPT-4, LLaVA-1.5, and Qwen-VL) provides a well-rounded assessment of DeCo\u2019s effectiveness.\"], \"weaknesses\": [\"In Figure 9, the response includes awkward repetition, with \\\"The horse statue is positioned on top of the chair\\\" stated multiple times. 
This raises questions about the effectiveness of the chosen \\u03b1 value in avoiding repetitive language, as the authors indicated that high \\u03b1 values could increase repetition.\", \"In Figure 10, DeCo reduces a significant hallucination (misidentifying a lift as a \\\"chair\\\"), but the output still contains a hallucination about \\\"several other people visible in the background.\\\" This discrepancy between benchmark performance and qualitative examples suggests that DeCo\\u2019s effectiveness might not fully translate into consistently accurate real-world responses.\", \"For each time step t, language tokens that are not related to the visual input but are essential for sentence generation may be influenced as they pass through the proposed method. There appears to be a lack of investigation into the nature of this influence.\"], \"questions\": [\"Given that DeCo's effectiveness depends on selecting an optimal layer range (e.g., 20-28), does the layer range need tuning for different MLLMs?\", \"Could you provide more details on the selection process for the 500 images used in experiments? Additionally, which split(s) were used to determine and evaluate the hyperparameters, and were any specific criteria applied for these selections?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper demonstrates that while MLLMs may produce incorrect target outputs in the final layer, they effectively recognize visual objects in the preceding layers. The authors propose a dynamic correction decoding method for MLLMs (DeCo), which adaptively selects relevant preceding layers and proportionally integrates their knowledge into the final layer to adjust the output logits. The proposed method outperforms existing approaches on public datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.
The motivation seems interesting.\\n2. The paper is well written and easy to follow. The diagrams are essential to understanding this paper.\\n3. This paper achieves good results on existing datasets.\\n4. The main technical pipeline is clear.\", \"weaknesses\": \"1. Although the experiments indicate improved performance in preceding layers, I am concerned about the coherence and richness of the text generated at these stages. Could you provide further evaluation metrics for text quality, such as BLEU or other relevant scores?\\n2. In Figure 1(b), the interval [10, 20] appears optimal, yet in Figure 7(b), [17, 28] shows better performance. Could you clarify this discrepancy?\\n3. Could you provide more evidence to demonstrate how dynamic soft modulation prevents abrupt changes in logits? Additional ablation studies might further substantiate this claim.\\n4. Could you share detailed MME results to highlight the method's performance across different subtasks?\", \"questions\": \"My primary concern lies in the potentially low quality of text generated from the preceding layers. I will be happy to raise my score if my current questions and concerns can be addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4ytRL3HJrq
Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning
[ "Nan Jiang", "Chengxiao Wang", "Kevin Liu", "Xiangzhe Xu", "Lin Tan", "Xiangyu Zhang", "Petr Babkin" ]
Binary code analysis is the foundation of crucial tasks in the security domain; thus building effective binary analysis techniques is more important than ever. Although large language models (LLMs) have brought impressive improvements to source code tasks, they do not directly generalize to assembly code due to the unique challenges of assembly: (1) the low information density of assembly and (2) the diverse optimizations in assembly code. To overcome these challenges, this work proposes a hierarchical attention mechanism that builds attention summaries to capture the semantics more effectively and designs contrastive learning objectives to train LLMs to learn assembly optimization. Equipped with these techniques, this work develops Nova, a generative LLM for assembly code. Nova outperforms existing techniques on binary code decompilation by up to 14.84 -- 21.58% higher Pass@1 and Pass@10, and outperforms the latest binary code similarity detection techniques by up to 6.17% Recall@1, showing promising abilities on both assembly generation and understanding tasks.
[ "large language model", "hierarchical attention", "contrastive learning", "assembly code" ]
Accept (Poster)
https://openreview.net/pdf?id=4ytRL3HJrq
https://openreview.net/forum?id=4ytRL3HJrq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLbFuEOuQl", "yMaDZLt4Bj", "vEQJlYD66M", "twtWNd8UFm", "o1Z4HYtojd", "nukDO1s3WE", "nALJigPDLN", "mpa5psXQql", "modxgZgXGm", "mBqWvQWnHl", "ldPyLwybT0", "aWbMGSgyAw", "Xvle72TClz", "UNLhxZpxdr", "Re1s7c79It", "NHsQ7L6VAu", "9Ng00QOfon", "88KucOADA5" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730616673676, 1732252325032, 1732679384770, 1732253531869, 1737524189842, 1730014485685, 1732253734926, 1730720418934, 1732595651452, 1732706376191, 1730706888695, 1732910843457, 1734795092626, 1732643373109, 1732252721730, 1732252649721, 1732253638929, 1730852328645 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_v7d8" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_cjoF" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_hTSw" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_MUPt" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_hTSw" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_n2JL" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Area_Chair_pNVS" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_cjoF" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Authors" ], [ "ICLR.cc/2025/Conference/Submission12395/Reviewer_MUPt" ] ], 
"structured_content_str": [ "{\"summary\": \"This paper introduces a generative model, Nova, tailored for assembly code tasks. Nova employs a hierarchical attention mechanism and is trained using contrastive learning objectives. This paper evaluates its effectiveness on two assembly code tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Strengths:\", \"The topic is interesting and important, addressing large language model (LLM) comprehension of assembly code.\", \"The paper is well-structured and easy to follow.\"], \"weaknesses\": \"Weaknesses:\\n- Comparison may be unfair due to different fine-tuning practices.\\n- Evaluation of individual components is insufficient.\\n- Generalization assessment is lacking.\\n \\n(1) Unfair Comparison: Nova is evaluated on two tasks, with fine-tuning applied specifically for each. However, the baseline models (such as Table 2) do not undergo the same fine-tuning for the tasks, leading to a potentially unfair comparison.\\n \\n(2) Component Evaluation: Nova\\u2019s hierarchical self-attention mechanism consists of three components, yet the paper lacks detailed performance assessments for each part. Despite a reasonable design, their individual impact remains unexamined.\\n \\n(3) Contrastive Learning Objectives: The contrastive learning objectives contain two distinct components. Further evidence is necessary to substantiate the utility of each objective. Additionally, the contrastive learning approach depends on the available optimization levels. 
Handling unseen optimization levels at inference should be discussed.\\n \\n(4) Normalization Process: In the data collection section, a normalization step is applied, but its relevance or benefit to Nova\\u2019s training is unclear.\\n \\n(5) Results across different optimization levels should be explored\\u2014e.g., training on O0, O1, O2 and testing on O3.\\n \\n(6) Random Sampling in BCSD Task: The BCSD task employs random sampling, yet statistical results are missing. Reporting such results would reduce the impact of randomness on performance claims.\", \"questions\": \"Please check my concerns in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the insightful review and questions.\\n\\n### 1. Clarification\\n\\n* The related work section has been moved to follow the introduction section.\\n* We have highlighted in the data collection section that Nova focuses on C source code and X86-64 Assembly code.\\n* We appreciate the careful review and suggestions. We revised the text accordingly and tried to solve the ambiguity (notations, citations, etc.).\\n\\n### 2. L2 Distance versus Cosine Similarity\\n\\nThe reason we use L2 distance during training is that the hidden states of LLMs (e.g., DeepSeek-Coder, on which we build Nova) are not normalized. Thus, the embedding we obtained for each source code and assembly function is not normalized. So we use L2 distance instead of adding normalization for simplicity.\\n\\nDuring the evaluation of the BCSD task, we reuse the framework provided in the baseline work CodeArt, which uses cosine similarity to rank the assembly functions. We normalize Nova\\u2019s embedding of assembly functions during the evaluation, and we know **the L2 distance after normalization keeps the same order as cosine similarity (smaller L2 distance means higher cosine similarity)**.
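For unit vectors the identity $\|a-b\|^2 = 2 - 2\cos(a,b)$ makes this order preservation exact; a small sketch with made-up embedding vectors (not Nova's actual embeddings) illustrates it:

```python
import numpy as np

# Made-up (unnormalized) embeddings: one query function and three candidates.
query = np.array([0.5, 2.0, -1.0])
cands = np.array([[1.0, 4.1, -2.0],
                  [3.0, -1.0, 0.5],
                  [0.2, 1.0, -0.4]])

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

q, c = normalize(query), normalize(cands)
cos_sim = c @ q                               # cosine similarity with the query
l2_dist = np.linalg.norm(c - q, axis=-1)      # L2 distance after normalization

# For unit vectors, ||a - b||^2 = 2 - 2*cos(a, b), so ranking candidates by
# ascending L2 distance is identical to ranking by descending cosine similarity.
assert (np.argsort(l2_dist) == np.argsort(-cos_sim)).all()
```

The same argument holds for any number of candidates, so Recall@K computed from either score is identical.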
So using normalization and cosine similarity during evaluation will not affect Nova\\u2019s performance.\\n\\n### 3. Pass@K\\n\\nPass@K is the most popular metric used to evaluate the correctness of code generation, defined in Equation 1 of the Codex paper [1]. \\n\\nTo be brief, Pass@K measures **how often** the model produces functionally correct code. Higher Pass@K means we are more likely to see correct decompilation if we let the model sample K decompilations.\\n\\n### 4. Table 4 Clarification\\n\\nIn Table 4, the \\u201cDeepSeekCoder + Nova\\u2019s Attention\\u201d is essentially Nova-1B, and \\u201cDeepSeekCoder + LongCoder\\u2019s Attention\\u201d is Nova-1B but replacing our hierarchical attention with LongCoder\\u2019s Attention. The rest are the same: both models are trained with the same data, and the same procedure (including CL). We clarify this in the updated version.\\n\\n### 5. Using `objdump -S` or `gcc -S`\\n`gcc -S` is an alternative way of collecting the Assembly data, and Nova can be generalized to the Assembly produced by using `gcc -S`.\\n\\nHowever, a key difference is that the assembly generated by `gcc -S` does not undergo the linking step. In practical scenarios, binary decompilation and analysis are typically performed on executable or linked assembly code, as it includes linker modifications and reflects the final binary structure. \\n\\nBesides, our data collection aligns with many existing baselines, such as LLM4Decompile and jTrans, that collect their assembly code from executable binaries.\\n\\n### 6. Normalization\\n\\n* Existing code LLMs\\u2019 tokenizers do not fit the assembly code very well. For instance: `\\u2019(%rax),%rbx\\u2019` is tokenized to `\\u2018(\\u2018 \\u2018%\\u2019 \\u2018ra\\u2019 \\u2018x\\u2019 \\u2018),\\u2019 \\u2018%\\u2019 \\u2018rb\\u2019 \\u2018x\\u2019`, where `\\u2019),\\u2019` is considered as one token.
This does not properly show the semantics, since in X86 Assembly, parentheses are explicitly used to denote memory addressing while commas are used to separate operands. As we cannot re-train the tokenizer, we normalize the assembly code to make the tokenization better reflect assembly code semantics. We add white space to separate `\\u2018(\\u2018`, `\\u2018)\\u2019` and `\\u2018,\\u2019`, so that they are always considered as separate tokens.\\n* We remove all the hexadecimal values as LLMs are very poor at understanding hexadecimal values.\\n* Instead, we use the `[INST-i]` tokens to index the instructions. In decoder-only generative LLM, we want them to summarize the semantics of each instruction and thus have to put them at the end of each instruction since decoder-only LLM only has single-direction attention.\\n\\n### 7. Clarification on Typo in $L_{BCSD}$\\n\\nYes, thank you for pointing out the typo. The numerator $f^q_j$ should be $f^p_j$, $f^p_j$ is the positive candidate in the pool that comes from the same source code function as the query $f^q$. We have fixed it in the updated version.\\n\\n### 8. Clarification on GPT Inference\\n\\nThe GPT models also sample 20 decompilation per Assembly code using the same hyper-parameter (temperature 0.2, top-p 0.95), we rephrase the text to make it clearer.\", \"reference\": \"[1] https://arxiv.org/abs/2107.03374\"}", "{\"comment\": \"We upload a new version of the PDF.\\n\\n1. \\\"Train constraints\\\" is indeed confusing. We revise the text between Line 260--268. We means \\\"train Nova so that its embeddings for functions satisfy the constraints\\\". Line 275--277 explains that in practice, the loss is calculated among each batch of functions (F is a batch of functions, not the entire dataset)\\n2. This is added to the Appendix A.6\\n3. Yes, we run test cases to validate correctness and calculate Pass@K. This is added in line 355--356.\\n4. 
I think our initial naming of \\\"DeepSeekCoder + Nova's attention\\\" confuses you. The reason you find that the numbers in the second row in Table 4 is the same as Nova-1B in Table 2 is because they are indeed the same model. We are comparing \\\"DeepSeekCoder + LongCoder Attention + CL\\\" with \\\"DeepSeekCoder + Nova Attention + CL\\\" in Table 4. The only difference between the two rows is replacing Nova's attention from Nova-1B with LongCoder's attention.\\n5. This is added to the Appendix in Line 892--899.\\n6. I agree with your point. This design is more by intuition before running experiments, and we do not have enough resources to test on this design choices. Yes, LLMs may be capable to generalize directly.\\nBy removing, I mean removing the address/offset of each instruction, e.g., for instruction \\\"4: push %rbp\\\", \\\"4\\\" is the address/offset. We remove this and use `[INST-i]` to indexing each instructions. For other hexadecimal values used in instructions, we convert them to decimal values.\\n\\nLet me know if you have additional concerns, we still have time before the end of Nov. 27th to update the PDF. Thank you.\"}", "{\"comment\": \"We thank the reviewer for the insightful review and questions.\\n\\n### 1. Fairness of Comparison \\n\\nIn our comparison with other models on decompilation, except for LLM4Decompile and Nova (these two are fine-tuned for this task), the other models are prompted with three examples as few-shot learning. Especially for GPT models, few-shot learning is the common practice of applying them to downstream tasks. We show that Nova-1B outperforms the most powerful models (as these are the most powerful commercial or open-source LLMs at the time) by a significant margin, highlighting the contribution of building such a foundation model for assembly code.\\n\\nBesides, **our main focus is the comparison with LLM4Decompile**. 
LLM4Decompile is a closely related baseline that also fine-tuned the base backbone (DeepSeek-Coder) using assembly code collected from AnghaBench. Nova-1B significantly outperforms LLM4Decompile showing that it is indeed Nova\\u2019s design (hierarchical attention + contrastive learning) that brings the improvement.\\n\\n### 2. Contribution of Each Component of Hierarchical Attention\\n\\nWe are only able to design the hierarchical attention empirically, as the build of Nova models is expensive, requiring thousands of GPU hours. \\n\\nYet, we compare Nova\\u2019s hierarchical attention design with LongCoder\\u2019s attention, showing that Nova\\u2019s attention design works better on assembly code.\\n\\n### 3. Contribution of Each CL Objective\\n\\nWe conduct additional ablation studies to study the impact of each CL objective (FCL for functional contrastive learning, OCL for optimization contrastive learning). Below is the result:\\n\\nNova$_{-CL-HA}$ means no CL and hierarchical attention (HA), basically standard fine-tuning.\\n\\nNova$_{-FCL-HA}$ means adding OCL.\\n\\nNova$_{-OCL-HA}$ means adding FCL.\\n\\nNova$_{-HA}$ means adding both FCL and OCL.\\n\\n| Model | O0 | O1 | O2 | O3 | Avg |\\n|-|-|-|-|-|-|\\n| Nova$_{-CL-HA}$ | 20.73 | 16.16 | 15.03 | 11.19 | 15.78 |\\n| Nova$_{-FCL-HA}$ | 22.38 | 16.20 | 16.37 | 13.25 | 17.05 |\\n| Nova$_{-OCL-HA}$ | 28.44| 18.87 | 18.53 | 15.76 | 20.40 |\\n| Nova$_{-HA}$ | 30.58 | 19.88 | 20.58 | 16.40 | 21.86 |\\n| Nova-1B | 37.53 | 21.71 | 22.68 | 18.75 | 25.17 |\\n\\nFrom this result, we can see that both FCL and OCL bring improvement. Yet, it is clear that FCL indeed has a bigger impact than OCL. We have added this result to the Appendix in our updated version of the PDF.\\n\\n\\n### 4. Normalization\", \"the_design_of_the_normalization_process_is_based_on_the_following_insights_and_requirements\": \"* Existing code LLMs\\u2019 tokenizers do not fit the assembly code very well. 
For instance: `\\u2019(%rax),%rbx\\u2019` is tokenized to `\\u2018(\\u2018 \\u2018%\\u2019 \\u2018ra\\u2019 \\u2018x\\u2019 \\u2018),\\u2019 \\u2018%\\u2019 \\u2018rb\\u2019 \\u2018x\\u2019`, where `\\u2019),\\u2019` is considered as one token. This does not properly show the semantics, since in X86 Assembly, parentheses are explicitly used to denote memory addressing while commas are used to separate operands. As we cannot re-train the tokenizer, we normalize the assembly code to make the tokenization better reflect assembly code semantics. We add white space to separate `\\u2018(\\u2018`, `\\u2018)\\u2019` and `\\u2018,\\u2019`, so that they are always considered as separate tokens.\\n\\n* We remove all the hexadecimal values as LLMs are very poor at understanding hexadecimal values.\\n\\n* Instead, we use the `[INST-i]` tokens to index the instructions. In decoder-only generative LLM, we want them to summarize the semantics of each instruction and thus have to put them at the end of each instruction since decoder-only LLM only has single-direction attention.\\n\\n### 5. Train on O0, O1, O2 and Test on O3\\n\\nWe acknowledge this could be a more challenging setting and interesting future work. Yet, most existing baselines (LLM4Decompile for decompilation, jTrans, CodeArt for similarity detection) we compare with use the same setting as us, so we think this research question is out of the scope of this paper.\\n\\n### 6. Randomness of Sampling\\n\\nThe randomness of sampling during generation is a common issue for other tasks such as code generation. 
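For reference, the standard way to average out this sampling randomness is the unbiased Pass@k estimator from the Codex paper, computed from n samples of which c are correct; a minimal sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Codex paper, Eq. 1):
    n = total generations sampled, c = how many of them were correct."""
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed in a numerically stable product form
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g., 20 sampled decompilations, 5 of which pass the test cases:
print(round(pass_at_k(20, 5, 1), 4))    # 0.25 (= 5/20)
print(round(pass_at_k(20, 5, 10), 4))
```

Sampling n = 20 generations while reporting Pass@1 and Pass@10 (k < n) is exactly what keeps the estimate's variance low.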
We think the metric Pass@K [1] already handles the problem since Pass@K measures the expectation/chance of getting correct decompilation if we let the model generate K times (e.g., Pass@1 means the expectation of having one correct decompilation if the model is generated once, and Pass@10 means the expectation of having one correct decompilation if the model is generated 10 times).\\n\\nIn our evaluation, we let each model sample 20 generations and then calculate Pass@1 (i.e., how many of the 20 generations are correct?) and Pass@10 (i.e., if I pick 10 generations from the 20, what is the chance of having at least one correct decompilation?). We think reporting Pass@K with sampling generation is a common practice and the result is statistical.\\n\\n[1] https://arxiv.org/abs/2107.03374\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper presents Nova, a generative LLM for assembly code. To effectively learn from assembly with low information density, it uses a novel hierarchical attention mechanism which combines intra-instruction attention, preceding-instruction attention and inter-instruction attention. It further utilizes contrastive learning to better learn the semantics of assembly code from different optimization levels. The authors demonstrate the superiority of Nova over the baselines on binary code decompilation and code similarity detection tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. LLMs for binary code is an important topic to study\\n2. This work proposes new methods to train Nova based on the properties of assembly code, which is clearly motivated.\\n3. The proposed models show a clear improvement in binary code decompilation.\", \"weaknesses\": \"1. The comparison on code similarity detection may not be fair. For example, CodeArt uses 12 transformer blocks with 768 hidden dimensions, whose size is smaller than Nova-1B. 
The authors should compare Nova with the baseline under a similar size with the same pre-training data to demonstrate the superiority of Nova on code similarity detection. For the current result, we can find that compared with CodeArt, Nova actually does not show a significant improvement (e.g. both are 0.64 for Nova-1B under K=500). So it is in question whether Nova is indeed better for code similarity detection.\\n2. The experiments for Comparison with Techniques Handling Long Input are confusing. Specifically, it has the following problems:\\n\\na) What is \\\"Nova\\u2019s Fine-Tuning\\\" in Table 3? It seems Nova does not have something special in terms of fine-tuning. Does it just mean fine-tuning with hierarchical attention or also with Nova's pretraining as suggested in Line 360? \\n\\nb) What is the average token length for downstream tasks before truncation? The authors want to claim Nova is better at solving long input challenges. But I see from the Appendix that Nova uses the input length as 1024 tokens during pre-training and 2048 for fine-tuning. It may be hard to claim this length to be \\\"long-context\\\". Considering that assembly code should be much longer than source code and Granite-3B-Code-128K can handle 128K input tokens at most, have you tested in the benchmarks where the input context is longer, e.g. 8k/32k/128k?\\n\\n3. The presentation of the paper can be improved. Specifically, a) Line 281 is unclear. The authors should clearly state that their pre-training contains two stages and the loss in Line 240 is used in the second stage. b) The ablation study should be separated into new subsections instead of mixing with Section 4.1 c) The equations are not numbered.\", \"questions\": \"1. See weakness 1,2\\n2. Could you provide more details about how to construct $F$ in practice used in Functional CL?\\n3. The authors state that hierarchical attention is only applied to half of the attention head. 
Since different attention heads can learn different features, I wonder if this setup is robust to the selection of the attention heads?\\n4. Would the pre-trained models (Nova-1B, 6B) be publicly available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We sincerely thank all the reviewers for their insightful review and questions. We have uploaded an updated version of our paper. The modifications are marked in blue:\", \"We moved the related work section to follow the introduction section to provide the reader with the background earlier.\", \"We revised the data collection section.\", \"We added clarification about the embeddings, and how the embeddings are obtained from the Nova model.\", \"We revised the text about the evaluation of the binary code decompilation task.\", \"We provided the study of the impact of each contrastive learning objective on the binary code decompilation task in the Appendix.\"]}", "{\"summary\": \"This paper presents Nova, a generative language model specifically crafted for understanding and generating assembly code. Nova integrates hierarchical attention mechanisms with contrastive learning to effectively capture the semantics of code. The hierarchical attention mechanism focuses on intra-instruction, preceding-instruction, and inter-instruction relations, while contrastive learning ensures that functionally equivalent assembly code, even at different optimization levels, is represented similarly. The model is evaluated on two key tasks: decompilation (recovering high-level source code from assembly) and binary code similarity detection (BCSD) (measuring the similarity between binary code functions).
Nova shows superior performance in both tasks, excelling in decompilation by accurately generating source code from optimized assembly, and achieving high recall in BCSD by effectively identifying similar code across different optimization levels.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-structured and easy to follow. Concepts such as hierarchical attention and contrastive learning are clearly explained.\\n2. The paper proposes a new method for encoding assembly code by using a Hierarchical Attention Mechanism to effectively capture the semantics of assembly instructions, while employing Contrastive Learning to ensure that functionally equivalent assembly code, even at different optimization levels, is represented similarly. This novel combination allows the model to robustly understand and learn from diverse assembly code structures.\\n3. The paper conducts a broad range of experiments across multiple tasks and datasets, providing comprehensive evidence of the model\\u2019s effectiveness.\\n4. Despite its specialized focus on assembly code, Nova's hierarchical attention is compatible with standard self-attention mechanisms, allowing it to seamlessly integrate and benefit from advancements in base models and code generation models.\", \"weaknesses\": \"1. Unclear motivation for introducing several inductive bias by Hierarchical Attention Mechanism. While the added attention mask inductive bias shows promising results in the BCD task, its impact in the BCSD task is minimal. This discrepancy raises questions about why the inductive bias performs well in one task but fails to offer significant improvements in the other.\\n2. Lack of Design Discussion. The paper lacks sufficient discussion on key design components like Preceding-Instruction Attention and Optimization Contrastive Learning (CL). 
Without Preceding-Instruction Attention, the attention design is quite similar to CodeART, raising questions about the novelty and contribution of the approach.\", \"questions\": \"1. The paper argues that preceding-instruction attention helps avoid reuse of the same register (e.g., \\\"eax\\\") immediately after it is used in the previous instruction. However, this motivation is questionable because it does not explain how further subsequent instructions are prevented from reusing the same register. A more straightforward solution could be achieved with inter-instruction attention, as it can attend to all previous instructions, which raises the concern of functional overlap between preceding-instruction attention and inter-instruction attention, thus potentially making the preceding-instruction attention redundant.\\n2. While Nova-1B and Nova-6B are much larger than CodeART, their performance gains in BCSD are limited. For example, in the k=100 case, CodeART sees a 17% improvement over JTrans with attention regularization, but Nova's improvement is only marginal, from 0.76 to 0.78 (as shown in Table 12). This suggests that adding hierarchical attention and other inductive biases provides limited benefits when scaling the model, and Tables 11-14 show that removing hierarchical attention does not lead to significant performance drops, questioning its overall necessity. And also in Table 5, the improvement brought by contrastive learning is much higher than the Hierarchical Attention.\\n3. In the paper's analysis of attention distribution (Figure 10), the standard attention frequently converges on the first token, a phenomenon known as attention sink [1]. This behavior is also evident in the analysis of hierarchical attention (Figure 10(c, d)), where each token strongly attends to the first token within its attention mask, specifically the [INST-(x-1)] token, which represents the summary of the previous instruction. 
But it is not common when humans try to interpret the functionality of each individual instruction. Furthermore, the justification for the Hierarchical Attention Mechanism \\u2014which selectively uses specific attention heads to represent the best attention maps\\u2014is somewhat ad hoc and lacks a clearer rationale. \\n4. The Hierarchical Attention Mechanism introduced in this paper represents a strong inductive bias; however, the underlying insights behind this inductive bias are not clearly explained. Additionally, the mechanism bears a striking resemblance to the Attention Regularization used in CodeART, with the primary difference being the absence of Preceding-Instruction Attention in CodeART. The effectiveness of this additional attention component has also been called into question earlier in the review, casting some doubt on its true contribution to the overall performance.\\n5. While the use of contrastive learning aligns well with the BCSD task\\u2014improving performance by ensuring that functionally similar binaries, even across different optimizations, are represented similarly\\u2014it's less clear how this objective enhances the model's ability in decompilation. The training goal focuses on increasing the similarity of tokens from the same function but compiled with different optimization settings. However, this doesn't seem directly aligned with the ultimate goal of recovering executable source code, which requires more precise structural and semantic understanding beyond just token similarity across optimization levels. It would be greatly appreciated if the authors could provide some intuition as to why this approach can lead to improvements in decompilation.\\n6. The authors introduced a novel optimization contrastive learning approach for the BCSD task, which had not been applied in previous works, which commonly use the InfoNCE loss (line 220) or the triplet loss.
As it is not discussed with deeper detail in the paper, it raises the question of whether these gains are substantial enough to justify the added complexity and whether this approach could be effectively generalized to improve other models in BCSD tasks.\\n\\n[1]: Efficient Streaming Language Models with Attention Sinks\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. Clarification. Thanks for the updates to the paper, I believe it's a good improvement to clarity. However, some of my points in \\\"Clarity 4.\\\" still stand (e.g., one does not train or optimize a constraint).\\n2. If you're re-using an existing framework and methodology, I think it should be explicitly mentioned. The information about normalization at evaluation time, and your point about order preserving, should also be present (maybe in the appendix).\\n3. Here as well, using the unbiased estimator of Codex for computing pass@k should be mentioned in the paper. But my question was more about what was considered \\\"correct\\\": are you relying on correct execution of unit tests, as in the Codex paper?\\n4. Thanks for updating the table and text. But wouldn't \\\"DeepSeekCoder + Nova's attention\\\" be closer to \\\"Nova_{-CL}\\\"? Isn't the contrastive learning important?\\n5. That makes sense, but maybe mention it in the appendix as well.\\n6. Normalization\\n * I understand the intuition, that makes sense. The LLMs may be capable to generalize a bit and understand that some tokens represent semantics from two successive things, but there's no real harm in making it explicit.\\n * By \\\"remove\\\" here, do you mean \\\"convert to decimal\\\" as in l. 167? it seems that way in Figure 6.\\n7. and 8. OK, thank you\"}", "{\"comment\": \"Thank you for the response. Here are some comments about your response.\\n\\n1. 
Design Choice of Preceding-Instruction Attention\n\nThe authors mention that \"inter-instruction attention is more about higher-level semantics.\" However, as evidenced by Figure 6 in CodeArt, \"token level inter-instruction\" fine-grained attention is not necessarily required to achieve excellent performance.\n\nAdditionally, this mechanism could be perceived as a sliding window with a window size equal to two at the instruction level. Why not extend the sliding window to the basic block level instead?\n\nThe authors also state that \"The preceding-instruction attention enables every token in an instruction to capture dependencies with previous instructions.\" However, this capability is also present in inter-instruction attention, where [INST-i] can attend to [INST-(i-1)], and through positional embeddings, the model can discern that this token is the preceding one.\n\n2. Comparison of Performance on BCSD\n\nThe authors claim that \"simply training DeepSeek-Coder-1B cannot outperform existing SOTAs on BCSD,\" which is an unfair comparison. In all attempts to use decoders as embedding models, as shown in [1], it is not common practice to use the base model's results directly for downstream tasks. Instead, with the incorporation of contrastive learning, a widely used embedding training method, $Nova_{-HA}$ achieves significantly better results.\n\nFrom the experiments in the appendix regarding the impact of each contrastive learning objective, it is evident that FCL has an effect, but it is not as significant as OCL, which is a common training method for BCSD. The applicability of FCL should also be considered, as monotonicity may not always be present under other compiler optimizations such as `-Os`, `-O3`, `-Ofast`. These optimization levels are not always directly comparable, affecting the generalizability of this training loss.\n\n3. 
Distinction Between Nova's Hierarchical Attention and CodeArt's Attention Regularization\n\nThe authors highlight that \"one fundamental difference between Nova's and CodeArt's attentions\" is that CodeArt uses bidirectional attention, whereas Nova employs unidirectional attention. This is not the core distinction. CodeArt's paper does not specify that it is limited to bidirectional attention. Current models, including Nova, do not handle the common jmp instruction in assembly code. The authors did not address how Preceding-Instruction Attention is managed with jmp instructions. Consider the following assembly example:\n\n```\n  jmp .label2\n.label:\n  mov rax, rbx\n  cmp rax, 0\n  je .label\n.label2:\n  ret\n```\n\nIn this example, the preceding instruction for `mov rax, rbx` should be `je .label`, but with unidirectional attention, it is impossible to add Preceding-Instruction Attention from `mov rax, rbx` to `je .label`. However, Nova maintains good performance in programs with numerous control flow jumps, indicating that unidirectional attention does not hinder understanding. Therefore, the core difference between CodeArt and Nova is not the attention mechanism's directionality.\n\nAdditionally, a simple search reveals that mixing unidirectional and bidirectional attention in LLMs is feasible [2][3].\n\n4. Generalizability\n\nIt is worth noting that the original review did not raise questions about generalizability. However, the authors addressed this in their rebuttal.\n\n5. Overall\n\nConsidering the aforementioned points, the authors have not clearly demonstrated the innovation of the attention mechanism in the paper. The role of Preceding-Instruction Attention and its distinction from CodeArt are not well-articulated. 
Nevertheless, given the promising results Nova achieves in improving binary code decompilation, I am open to reconsidering the score.\\n\\n[1] Improving Text Embeddings with Large Language Models (http://arxiv.org/abs/2401.00368)\\n\\n[2] Bitune: Bidirectional Instruction-Tuning (https://arxiv.org/abs/2405.14862v1)\\n\\n[3] Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model (https://arxiv.org/abs/2408.11039)\"}", "{\"summary\": \"This paper presents Nova, a generative language model specifically designed for assembly code, addressing unique challenges posed by the low information density and diversity in assembly syntax due to compiler optimizations. Nova introduces a hierarchical attention mechanism and employs contrastive learning to improve the model's understanding of assembly semantics across diverse optimization levels. Trained on a large assembly corpus, Nova outperforms existing techniques in tasks like binary code decompilation and binary code similarity detection, showing improvements in Pass@1 and Recall@1 rates over state-of-the-art models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Clear Writing and Novel Application: The paper is well-written and easy to follow. The idea of applying hierarchical attention to assembly code is interesting and novel. While hierarchical attention is commonly used in NLP tasks, applying this mechanism to assembly code is, to the best of my knowledge, unprecedented.\\n\\n2. Promising Results: The evaluation results are promising. 
Nova demonstrates substantial improvements in both decompilation accuracy and similarity detection compared to existing models, validating its approach with strong experimental evidence.\", \"weaknesses\": \"Generalizability: The model is trained exclusively on x86 assembly code, which may limit its generalizability to other assembly languages, such as ARM or MIPS.\", \"realism_of_evaluation_settings\": \"(1) The decompilation prompt requires optimization level information, but it is unclear if this information is accessible in stripped binaries.\n\n(2) For baseline models like GPT, fine-tuning with additional data isn't necessary, raising questions about the fairness of the comparison. If GPT were given a few-shot learning setup or fine-tuned using OpenAI's API, could it still be outperformed by the proposed approach?\", \"related_work\": \"The paper omits discussion of several relevant works, which could provide a broader context for its contributions.\n\n[1] Debin: Predicting Debug Information in Stripped Binaries. CCS 2018.\n\n[2] DIRE: A Neural Approach to Decompiled Identifier Renaming. ASE 2019.\n\n[3] Learning to Reverse DNNs from AI Programs Automatically. IJCAI 2022.\n\n[4] Asm2Vec: Boosting Static Representation Robustness for Binary Clone Search against Code Obfuscation and Compiler Optimization. S&P 2019.\n\n[5] Neural Network-based Graph Embedding for Cross-Platform Binary Code Similarity Detection. CCS 2017.\n\n[6] Decompiling x86 Deep Neural Network Executables. Security 2023.\", \"questions\": \"For binary similarity detection, compilers may inline functions or eliminate them altogether. 
How does your approach handle such scenarios?\n\nIf additional information (e.g., execution traces) were provided to GPT, or if iterative interaction with GPT were allowed, could the proposed approach still outperform a GPT-based model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the follow-up. We generally agree with the reviewers, but we want to clarify a few disagreements.\n\n### **1 & 3: CodeArt's Limitation on Unidirectional Attention, which Motivates Preceding-Instruction Attention**\n\nCodeArt's attention is indeed limited when applied to a unidirectional-attention (decoder-only) model. The reviewer says that \\\"token level inter-instruction\\\" fine-grained attention is not necessarily required based on CodeArt's Figure 6, which seems not entirely true given their use of the `<cls>` token.\n\nConsidering two instructions, CodeArt puts `<inst>` at the beginning of every instruction and puts a `<cls>` token at the beginning of the entire binary program:\n```\n<cls>\n<inst> mov ( rdi ) , ecx\n<inst> mov ( rsi ) , edx\n```\n\nIn CodeArt's design, the `<inst>` tokens have access to all the other `<inst>` tokens, so they are responsible for cross-instruction dependencies; our inter-instruction attention is similar to this.\n\nBut the other tokens, such as `mov` and `rsi`, do not have access to other instructions, so the embeddings of these tokens inside each instruction may not consider the context. To address this, CodeArt gives every token inside the instruction access to the `<cls>`, and `<cls>` has access to all tokens in the binary program. Thus the `rsi` in the 2nd instruction can capture the dependencies with `rdi` in the 1st instruction, through `rsi -> <cls> -> rdi`, and vice versa. 
The CodeArt authors call it \\\"global context\\\" (Figure 6b in CodeArt paper), where each token does have dependencies with every other token through this `<cls>` token. \n\n**However, this design is not applicable to the decoder-only models**, because in decoder-only models, there cannot be such a `<cls>` token as a \\\"hub\\\" to transfer attention among tokens in different instructions:\n* if we put `<cls>` at the beginning, the `<cls>` token won't have access to any tokens.\n* if we put `<cls>` at the end, the other tokens won't have access to it.\n* if we add `<cls>` in the middle, the `<cls>` won't have access to tokens after it, and the tokens before `<cls>` won't have access to `<cls>`, still lacking circular attention transfer. \n\nCodeArt's design actually shows we do want tokens inside instructions to have some attention to the context, although the intra-instruction attention within each individual instruction should be dominant. This benefits the hidden states of the non-`<inst>` tokens. \n\n**Nova achieves this with preceding-instruction attention**. See the same example binary but in Nova's format:\n\n```\nmov ( rdi ) , ecx [inst-1]\nmov ( rsi ) , edx [inst-2]\n```\n\nThe `rsi` could have access to `rdi` through `rsi -> [inst-1] -> rdi`. `rdi` cannot access `rsi` anyway due to the limitation of unidirectional attention.\n\nThe reviewer proposes using window attention with a window size equal to two instructions, which indeed could be a promising design. But this is conceptually similar to our preceding-instruction attention together with the intra-instruction attention.\n\nWe also agree that other mixed designs could be a choice. We would like to discuss them in the related work section in the final version. 
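As a toy illustration of the masking scheme discussed in this thread (our own sketch, not Nova's released implementation): a causal mask combining intra-instruction, preceding-instruction, and inter-instruction attention. Letting an ordinary token see all tokens of the immediately preceding instruction, rather than only its summary token, is a simplifying assumption here:

```python
def hierarchical_mask(inst_of, is_inst_tok):
    """Build a boolean attention mask (mask[q][k] True = q may attend to k).

    inst_of[t]    : index of the instruction that token t belongs to
    is_inst_tok[t]: True if token t is a summary token like [INST-i]

    On top of causality (key position <= query position), a query may
    attend to a key if they share an instruction (intra), the key is in
    the immediately preceding instruction (preceding), or both tokens
    are [INST-*] summary tokens (inter).
    """
    n = len(inst_of)
    mask = [[False] * n for _ in range(n)]
    for q in range(n):
        for k in range(q + 1):  # unidirectional (decoder-only) attention
            intra = inst_of[q] == inst_of[k]
            preceding = inst_of[k] == inst_of[q] - 1
            inter = is_inst_tok[q] and is_inst_tok[k]
            mask[q][k] = intra or preceding or inter
    return mask

# Three instructions of three tokens each; the last token of every
# instruction plays the role of its [INST-i] summary token.
inst_of = [0, 0, 0, 1, 1, 1, 2, 2, 2]
is_inst_tok = [False, False, True] * 3
mask = hierarchical_mask(inst_of, is_inst_tok)
```

In this toy layout, a token inside instruction 1 can attend to the [INST-0] summary token, ordinary tokens of instruction 2 cannot see instruction 0 at all, and the summary tokens can all see each other subject to causality.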
However, we cannot explore these solutions in the scope of this work.\n\n### **2: A Small Misunderstanding of Table 11**\n\nIn Table 11, `Nova-OCL-HA` (the better one) is without OCL so it is actually using FCL. We did \\\"component removal\\\" in the ablation study design. Our conclusion is that FCL has a bigger impact than OCL.\n\nFCL aligns the functionality. Given the same functionality, no matter whether compiled with `O0-O3`, `Os`, or `Ofast` optimization, the assembly should all be grouped together. This is a common method for BCSD, and this is generalizable to different optimizations.\n\nOCL considers the optimization monotonicity, which has less impact and is less generalizable. But Table 11 still shows that adding OCL leads to better results, especially with O0-O3 binaries. So one should not ignore the contribution of OCL during training.\n\nExcept for this small misunderstanding, we agree with the reviewer's opinion on the BCSD results.\n\nWe appreciate the reviewer's acknowledgment of Nova's promising results on the decompilation task.\"}", "{\"metareview\": \"This paper studies using LLMs for binary code analysis. Although many works have used LLMs on source code tasks, these cannot be directly generalized to assembly code. The authors proposed an attention mechanism to capture the semantics in binary code for LLM training. Their method outperforms the latest binary code similarity detection approaches.\n\nAfter rebuttal, this paper received diverse scores including three positive scores and two negative ones. I read all comments and rebuttals, especially those of Reviewer hTSw, who gave a score of 3. Although there is no reply from Reviewer hTSw, I think almost all questions are addressed. I think this paper can be accepted.\", \"additional_comments_on_reviewer_discussion\": \"There are five reviews on this paper but only one reviewer participated in the rebuttal (a simple reply without any real discussion). 
After trying to ping them, I need to decide whether the questions are resolved in the rebuttal, especially for the two negative reviews. The reviewer hTSw asked a lot of detailed questions about this paper but did not raise any key issues that would directly lead to a rejection. The rebuttal is clear and all questions are answered. So even without confirmation from reviewers, I feel this paper should not be rejected.\"}", "{\"comment\": \"Thanks for the authors' response. I think it basically clarifies my questions.\"}", "{\"comment\": \"We thank the reviewer for the insightful review and questions.\n\n### 1. Optimization Level During Inference\n\nWe conduct an evaluation of decompilation where the optimization level is not provided to the model. Fortunately, Nova is able to keep the highest performance without the optimization information.\n\n| Model | O0 | O1 | O2 | O3 | Avg |\n|-|-|-|-|-|-|\n| Nova-1B without optimization info. | 37.23 | 20.98 | 22.42 | 19.05 | 24.92 |\n| Nova-1B | 37.53 | 21.71 | 22.68 | 18.75 | 25.17 |\n\nWithout the optimization information, the averaged Pass@1 of Nova-1B drops slightly from 25.17 to 24.92, yet it still significantly outperforms other baselines. We also find that without optimization information, Nova-1B actually performs slightly better on decompiling O3 assembly, which could be due to experimental variance.\n\nWe think one possible reason why Nova is able to handle decompilation without optimization information is that during the pre-training (language modeling, contrastive learning) stage, the assembly code is input without any optimization information; Nova is trained to predict assembly instructions and encode assembly code without knowing the optimization level. This training enables Nova to perform well on decompilation without knowing the optimization information.\n\n### 2. 
GPT Baseline\n\n* We want to clarify that during the inference of binary code decompilation, except for LLM4Decompile and Nova (these two are fine-tuned for the task), we provide the other baseline models with three examples for few-shot learning. Thus, the GPT models have three-shot examples.\n\n* We conduct a one-round self-debug experiment using GPT-4o, letting GPT-4o revise the decompilation if it fails the test cases. The Pass@1 result is shown below:\n\n| Model | O0 | O1 | O2 | O3 | Avg |\n|-|-|-|-|-|-|\n| GPT-4o | 21.34 | 18.29 | 14.48 | 13.05 | 16.79 |\n| GPT-4o + Self-Debug | 25.46 | 21.08 | 16.73 | 14.19 | 19.37 |\n| Nova-1B | 37.53 | 21.71 | 22.68 | 18.75 | 25.17 |\n\nAlthough GPT-4o is able to revise some incorrect decompilation if we provide the execution feedback to it, the Pass@1 after such revision is still much worse than Nova-1B, which is designed and fine-tuned for assembly code.\n\n### 3. Related Works\n\nWe thank the reviewer for pointing out additional related work, and we have added them to the \"Binary Models\" section in the related work part in our updated version of the PDF.\"}", "{\"comment\": \"We thank the reviewer for the insightful review and questions.\n\n### 1. Design Choice of Preceding-Instruction Attention\", \"there_are_two_reasons_for_using_preceding_instruction_attention\": \"* Intuitively, we separate the preceding-instruction attention from the inter-instruction attention, since inter-instruction attention is more about higher-level semantics. For the example in Figure 1 (a) and (b), the five instructions from 10 to 1c correspond to the if-condition in the source code. Such functional semantics are different from the more fine-grained dependencies in the compiler's view (allocation of registers, maintenance of the stack).\n* Another reason is that Nova is designed for decoder-only generative LLMs, where the attention is single-directional. 
The preceding-instruction attention enables every token in an instruction to capture dependencies with previous instructions. The inter-instruction attention, in contrast, is only among the `[INST-i]` tokens (by design, since we want it to capture functional semantics across multiple instructions), and thus the other tokens in an instruction do not have access to the inter-instruction attention.\n\n### 2. Improvement of BCSD\n\nWe agree the improvement on the binary code similarity task is less significant than that on the binary code decompilation task. Yet, binary code similarity is a much better-explored domain than binary code decompilation. More related work from the software engineering and security domains has designed models for it. The baselines we compare (jTrans, DiEmph, and CodeArt) have already achieved impressive performance. By contrast, end-to-end binary code decompilation is a very new task and LLM4Decompile is the only existing SOTA baseline we can find. Thus, we think there is less space to improve on binary code similarity than binary code decompilation, and that Nova outperforming existing SOTAs is valuable and hard.\n\nBesides, those specialized BCSD models we compare with are encoder transformers. The bidirectional attention benefits their performance on the BCSD task as it is essentially an encoding task. Using a decoder-only LLM for encoding tasks such as the BCSD task is less common and harder. Our ablation study (in Appendix Table 12-15) also shows that without Nova's designs, simply training DeepSeek-Coder-1B cannot outperform existing SOTAs on BCSD.\n\nThen why do we explore Nova (a decoder-only LLM) on BCSD? 
Given that decoder-only LLMs are getting much more popular and promising, and the latest advanced LLMs are almost all decoder-only (Llama, StarCoder, etc.), Nova shows the potential of building on top of the latest decoder-only LLMs for encoding tasks such as BCSD so that we can make good use of those pre-trained decoder-only LLMs.\n\n### 3. Difference Between Nova's Hierarchical Attention and CodeArt's Attention Regularization\n\nThere is one fundamental difference between Nova's and CodeArt's attentions. CodeArt is designed for Encoder LLMs (e.g., BERT) with bidirectional attention, while Nova is designed for Decoder-only generative LLMs with single-directional attention. This design choice reflects the fact that decoder-only LLMs show more promise in generation tasks, and the most advanced LLMs are typically decoder-only.\n\nThus, Nova's attention has to use the preceding-instruction attention to let every token in an instruction capture dependencies with previous instructions.\n\nIn addition, Nova's design is compatible with the standard self-attention used for source code (Figure 3 (c)), as we want to build Nova's approach as a plug-in that can work with any transformer-based, decoder-only LLM to make use of their pre-training knowledge. CodeArt's design, in contrast, only works for assembly code and cannot handle input that mixes assembly and source code or natural language.\n\n### 4. How Contrastive Learning Improves Decompilation\n\nOur insight is that contrastive learning enables the model to produce similar embeddings (i.e., the hidden states from the last transformer layer for `[INST-i]` tokens) for assembly code from the same source code. 
As LLMs understand the O0 assembly code the best (O0 assembly is decompiled with the highest pass@k), such alignment enables the LLMs to link the O1-O3 assembly with the O0 assembly and potentially learn the O1-O3 assembly better.\n\nBesides, during the decompilation, the LLM generates the decompilation given the assembly code as input, i.e., the generation of decompiled source code is conditioned on the hidden states/embeddings of the input assembly code. Thus, we think better embeddings can lead to better decompilation.\n\n### 5. Generalizability\n\nNova's approach is generalizable to different assembly languages such as ARM or MIPS, since they share similar challenges with x86. Nova's approach can also be used on any transformer-based, decoder-only generative LLM. \n\n[1] https://aclanthology.org/2023.acl-long.355/\"}", "{\"comment\": \"We thank the reviewer for the insightful review and questions.\n\n### 1. Improvement on BCSD\n\nWe acknowledge that Nova's improvement on BCSD is not as significant as on BCD, especially given that Nova has a larger model size. \n\nYet, binary code similarity is a much better-explored domain than binary code decompilation. More related work from the software engineering and security domains has designed models for it. The baselines we compare (jTrans, DiEmph, and CodeArt) have already achieved impressive performance. By contrast, end-to-end binary code decompilation is a very new task and LLM4Decompile is the only existing SOTA baseline we can find. Thus, we think there is less space to improve on binary code similarity than binary code decompilation, and that Nova outperforming existing SOTAs is valuable and hard.\n\nBesides, those specialized BCSD models we compare with are encoder transformers. The bidirectional attention benefits their performance on the BCSD task as it is essentially an encoding task. 
Using a decoder-only LLM for encoding tasks such as the BCSD task is less common and harder. Our ablation study (in Appendix Table 12-15) also shows that without Nova's designs, simply training DeepSeek-Coder-1B cannot outperform existing SOTAs on BCSD.\n\nThen why do we explore Nova (a decoder-only LLM) on BCSD? Given that decoder-only LLMs are getting much more popular and promising, and the latest advanced LLMs are almost all decoder-only (Llama, StarCoder, etc.), Nova shows the potential of building on top of the latest decoder-only LLMs for encoding tasks such as BCSD so that we can make good use of those pre-trained decoder-only LLMs.\n\n\n### 2. Table 3\n\nIn Table 3, Nova's fine-tuning means applying hierarchical attention and using contrastive learning objectives. We want to show that our approach brings improvement over standard fine-tuning on multiple backbones (both DeepSeek-Coder 1.3B and 6.7B, as well as Granite-3B-Code).\n\nMaybe it is clearer to call it \\\"Granite + Nova's Approaches\\\".\n\n### 3. Token Length\n\nIn the HumanEval-Decompile benchmark, the longest assembly function has 3240 tokens, and the average is 698 tokens. We do not truncate during inference; we truncate during training mainly to speed up training and save GPU memory. \n\nOur point is not purely about longer input: the challenge of assembly code is longer input combined with low information density. The model has to summarize the tokens at a higher level to extract functional semantics.\n\n\n### 4. Details of Functional CL\n\nThe F in the Functional CL (L 264) is a set of functions in the training data. For example, in practice, we have a batch size of 64 during the training with Functional CL; then F is a set of 64 unique functions, and each function has source code and O0, O1, O2, and O3 assembly formats.\n\nThen we can calculate the loss $L_{fcl}$ (L 274) among this batch and perform backward propagation. \n\n\n### 5. 
Application of Hierarchical Attention\n\nIn Appendix Table 10, we show that neither removing hierarchical attention entirely nor applying hierarchical attention to all the attention heads performs well. That's why we choose to apply it to half of the attention heads. \n\nAlthough a more fine-grained study could be conducted (e.g., applying it to a quarter of the heads and so on), given that each training of Nova requires nearly a thousand GPU hours, we are unable to thoroughly search the design choices.\n\nIntuitively, this is a trade-off between the effectiveness of hierarchical attention and the pre-training knowledge that is kept in the original attention heads.\n\n### 6. Availability\n\nYes, we plan to release the model weights and code (for the hierarchical attention implementation) after the notification.\"}", "{\"summary\": \"The paper presents a way of training an LLM to improve its performance on tasks that require understanding of assembly code, in particular code decompilation, and assembly code similarity detection.\", \"this_is_achieved_by_several_contributions\": \"1. A multi-way, parallel corpus of programs written in C, as well as the corresponding assembly produced by `gcc` with different levels of optimization (0 to 3), used for further training of pre-trained LLMs.\n2. A hierarchical attention mechanism, structured to summarize the content of each instruction into the representation of a single token. This mechanism is compatible with existing models.\n3. 
Two auxiliary contrastive loss objectives: a \"functionality\" one that minimizes the distance between representations of the same original code, while maximizing the distance between representations of different code pieces, and an \"optimization\" one encoding the fact that further levels of optimization should increase the distance between program representations.\n\nTwo variants (with 1B and 6B parameters respectively) of a model trained with these changes, and further fine-tuned for the task of interest, show a large improvement over state-of-the-art.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Originality\n--------------\n1. While hierarchical attention mechanisms are not new, the design of this one is innovative in that: it takes into account the specific format and constraints of assembly instructions, and it accommodates using regular tokens in the same sequence (e.g., natural text instructions).\n2. The contrastive objective losses, as well, encode a priori knowledge of the underlying data: compilation stages preserve semantics, and optimization stages are sequential.\n\nQuality\n----------\nThe different contributions are overall sensible, and contribute to the performance of the model. Experiments are adequately designed, and support the conclusions of the paper. The additional experiments help understand the role of the different contributions, in particular their effect on how embeddings get clustered and the effect it can have on the final model's performance.\n\nClarity\n---------\n1. The paper includes most of the relevant information, either in the main text or appendix. Relevant literature is mentioned and cited.\n2. Figures and examples make it much easier to understand the logic, especially Fig. 3.\n\nSignificance\n-----------------\n1. This work shows a significant improvement on benchmarks, sustained across model sizes, and adaptable to other models. 
This is an advancement on an important, developing field of machine learning applications.\n2. The fact that these improvements do not require any in-depth change (e.g., to the vocabulary) and are compatible with already pre-trained models makes it easier to experiment with them in different settings.\", \"weaknesses\": \"Quality\n----------\n1. One of the 3 motivating cases in the introduction, malware detection, is not evaluated or considered at all in the rest of the paper. I understand the scope of the paper needs to end somewhere, but it would have strengthened the paper to include experiments on such a dataset.\n2. Details are missing in how the authors are certain that test data sets (both for decompilation and for similarity detection) do not overlap with any of the training data, including the pre-training data of DeepSeek-Coder, even inadvertently.\n3. An additional ablation on $\\textrm{Nova}_{-CL}$ would have helped see if there are any non-linear interactions between HA and CL.\n\nClarity\n---------\nThe overall organization of the paper could be improved. Many times, a concept, method, or setting is used in context before being formally explained. For instance:\n1. If the \"Related Work\" section is positioned earlier, it would help introduce the baseline models (DeepSeekCoder, LLM4Decompile) that are used in the previous \"Results\" section, as well as attention mechanisms, including LongCoder's, also used earlier.\n2. When describing the new datasets, it should be clear much earlier that \"source code\" really means \"C code\" (in the caption of Table 1, for instance), \"assembly\" is X86 assembly (or maybe X86-64? that's not so clear), that only `gcc` is considered as a compiler, and whether each \"program\" actually means a full executable program, or if it includes functions as well.\n3. 
Similarly, the contrastive losses mention \"the embedding\" of a function, which is quite ambiguous in transformers, especially if the model family (encoder-decoder?) is not mentioned.\n4. There is also a lot of ambiguity in notation, or the semantics of different objects. For instance:\n * Do Table 1, and Appendix A.2, refer to the original \"AnghaBench\" and \"The-Stack\" datasets, or the new datasets constructed by the authors in Section 2.1? Maybe it would be better to name the new ones.\n * In Functionality CL, l. 208 says it \"optimizes Nova with the constraint\", but a constraint is not a loss or objective. l. 215, \"constraints can be trained\" does not really make sense. It's also not obvious how the loss defined at l. 220 actually implements (a relaxation of) these constraints. It's also not explained if the sum over $f_i \\in F$ is actually done over all the million embeddings in the corpus, or how it's implemented in practice.\n * K is introduced in Section 2.5, and then in 3.3., but we don't know what kinds of values will be used in practice. Also, Table 2 uses \"Pass@K\", but that's not the same K.\n * In captions of Fig. 4 (b) and (d), the tables are more \"designs\" than \"implementations\"\n * In Fig. 4 (b), the 1-4 indices are unfortunate as, for instance, $O0_3$ reads a lot like `-oO3`\n * The equations at l. 220 and l. 266 have a really similar form, but the use of indices $i$ and $j$ is swapped between the two, making it a bit harder to follow.\n\nSignificance\n-----------------\nThe results are somewhat limited by the use of a single assembly language, and a single compiler, but this is acknowledged and does not seem like a fundamental limitation.\n\nMinor points\n-----------------\nl. 461: \"cauclated\" -> \"calculated\"?\", \"in_the_bibliography\": [\"Vaswani et al. 
is actually from 2017, not 2023 (though the arXiv version has had an inconsequential update in 2023), and a venue should be indicated (I'd suggest NeurIPS rather than arXiv)\", \"Other articles are missing a venue or source\", \"Several articles have incorrect capitalization in the title due to the lack of curly braces, e.g., use `{CodeT5}` to avoid it being rendered as \"Codet5\".\"], \"questions\": \"1. Why does the evaluation for code similarity detection use cosine similarity (l. 321) when the objective (l. 212) uses the l2 distance?\n2. What is the underlying metric for the Pass@k in the decompilation evaluation? Exact match, or some more lenient equivalent? It seems wrong to use exact match when, for instance, variable names would be arbitrary.\n3. In Table 4, the second row is exactly the \"Nova-1B\" row of Table 2, but I was under the impression that \"Nova-1B\" was more than just \"DeepSeekCoder + Nova's attention\", in particular the additional training data, and CL objective. Are the numbers off, or the caption, or did I miss something?\n4. When creating the assembly datasets (Appendix A.1), why go all the way to compiling executables, then using `objdump` for disassembling, with the associated possibilities of failure, rather than dump the assembly in the first place with `gcc -S`?\n5. Do you have preliminary results, citations, or intuition behind the \"normalizing\" step of the assembly language performed in Fig. 6, in particular the addition of spaces? Is that necessary?\", \"minor_points\": \"1. In the numerator on l. 265, is $f_j^q$ supposed to be $f^q$? or $f_j^p$ for which the substitution wouldn't apply?\n2. l. 
300, how many samples do the GPT models perform, then, to be able to compute the Pass@10 in Table 2?\\n\\nEdit after discussion and update\\n--------------------------------------------\\nThe overall clarity has improved, and additional information was provided.\\nMost questions have been answered, so I'm raising my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4ytHislqDS
IFORMER: INTEGRATING CONVNET AND TRANSFORMER FOR MOBILE APPLICATION
[ "Chuanyang Zheng" ]
We present a new family of mobile hybrid vision networks, called iFormer, with a focus on optimizing latency and accuracy on mobile applications. iFormer effectively integrates the fast local representation capacity of convolution with the efficient global modeling ability of self-attention. The local interactions are derived from transforming a standard convolutional network, \textit{i.e.}, ConvNeXt, to design a more lightweight mobile network. Our newly introduced mobile modulation attention removes memory-intensive operations in MHA and employs an efficient modulation mechanism to boost dynamic global representational capacity. We conduct comprehensive experiments demonstrating that iFormer outperforms existing lightweight networks across various tasks. Notably, iFormer achieves an impressive Top-1 accuracy of 80.4% on ImageNet-1k with a latency of only 1.10 ms on an iPhone 13, surpassing the recently proposed MobileNetV4 under similar latency constraints. Additionally, our method shows significant improvements in downstream tasks, including COCO object detection, instance segmentation, and ADE20k semantic segmentation, while still maintaining low latency on mobile devices for high-resolution inputs in these scenarios. Code and models are available at: https://github.com/ChuanyangZheng/iFormer.
[ "Lightweight Networks", "Efficient Networks", "Vision Transformers", "Classification" ]
Accept (Poster)
https://openreview.net/pdf?id=4ytHislqDS
https://openreview.net/forum?id=4ytHislqDS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xKTRM0gIwM", "rqKqzLrK4H", "qIHY6N7cPN", "nfK6rbOgir", "nIaqVAmf5F", "jaczOx0wZl", "fV716kKFMA", "eMeJsr0hNv", "Odo5XI2eDw", "LHHtk41Zvn", "IrgP76JIGt", "HSLw5oSJlJ", "CjlXIoXEWX", "C0F8M6hDR8", "7EnCO1h5f4", "5jThVgKbqr", "4TLkFvseTg", "4RtV3zMOsB", "3oG64OOpSU", "3XF4QfKbj7" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732635841194, 1731947671124, 1732396325745, 1730157474640, 1731947773825, 1731942444262, 1730334401368, 1732626150040, 1732696657469, 1731943083389, 1734226388988, 1731949853293, 1730833935283, 1731942875906, 1730857124048, 1732279069493, 1737523424168, 1731942285978, 1730870704885, 1731947552049 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission938/Reviewer_fhTG" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_aPLf" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_aPLf" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_fhTG" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_waKF" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_XLkf" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Area_Chair_7WTc" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_waKF" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_XLkf" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission938/Authors" ], [ "ICLR.cc/2025/Conference/Submission938/Reviewer_wKqn" ], [ "ICLR.cc/2025/Conference/Submission938/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your response. I will maintain my score.\"}", "{\"title\": \"Response to Reviewer wKqn (part 2/2)\", \"comment\": \"W2, W3:\\n\\nWe are sorry for any confusion caused by the unclear statement. We would like to clarify that our SHA has no relationship to SHViT. We have included more details about SHA in the Section D of the revised version (highlighted in blue). Additionally, we will release the source code in the final version.\\n> According to the citation of SHViT in this paper, I suppose ... in SHViT design.\\n\\nThe SHA in iFormer is different from SHViT in the following three key aspects. First, in terms of motivation, iFormer explores efficient attention mechanisms in an on-device environment, while SHViT focuses on general-purpose GPUs, which may exhibit distinct characteristics.\\n> full channels of input (CxHxW) are ... single head.\\n\\nSecond, in terms of methodology, the full channels of the input (CxHxW) are projected to Q/K (C/R X L) and V (C x L), where R is the head reduction ratio and is set to 2 in our models. We have updated the Figure 4 of the revision to reflect this. In contrast, SHViT splits the input and utilizes fewer than 1/4 of the channels for attention. This reduction in channels can lower the rank of the attention matrix, thereby degrading its representational capacity. Furthermore, we do not use split and concatenation operations as they tend to exacerbate memory access costs.\\n> I suggest the authors conduct an ablation study ... the proposed method.\\n\\nTo show this, we refer to the SHA baseline in Table 1, i,e., 'SHA' in Figure 2. 
Subsequently, we transition it toward SHViT as follows:\\n\\n| Model | Params | GMACs | Latency | Top-1 |\\n|:----------------------------|:------:|------:|--------:|------:|\\n| SHA Baseline without Modulation | 9.9M | 1758M | 1.12ms | 79.8 |\\n| + split | 9.9M | 1758M | 1.18ms | - |\\n| + attention on 1/4 channels | 8.3M | 1547M | 1.02ms | - |\\n| + concat | 8.7M | 1579M | 1.11ms | 79.5 |\\n\\nIt can be observed that split and concat operations introduce additional runtime. Furthermore, the performance of the SHA in the SHViT exhibits a decline compared to its counterpart in iFormer under similar latency conditions (79.8 v.s. 79.5). This degraded performance may be attributed to the reduced number of channels in the attention mechanism. We have updated this information in Section D of the revision.\", \"q2\": \"Since we do not apply SHViT, the only difference between the MHA Baseline and the SHA Baseline in Table 1 is the number of heads, which pertains to reshaping. The performance and latency of the MHA serve as the missing step between 'kernel sz.' and 'SHA', represented by the MHA Baseline.\"}", "{\"comment\": \"Thank you for your response. I will keep my score.\"}", "{\"summary\": \"Summary\\n\\nThis paper proposes a mobile-friendly vision network that improves the latency and accuracy by combining the strengths of both CNNs and ViTs. The novel aspect of this work is the single head modulation self-attention (SHMA). This SHMA learns spatial context through optimized self-attention. It takes ConvNext as the base model and improves it further with various techniques. The authors streamline the ConvNeXt architecture, making it suitable for real-time use on mobile devices, such as the iPhone 13, focusing on reducing latency rather than FLOPs or parameter count. The combined techniques lead to more than 80% top-1 accuracy with 1.1ms latency on iPhone 13. 
Overall a great contribution to the research community.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. fast local representation capacity of convolution and the efficient global modeling proficiency of the proposed SHMA\\n2. A series of novel techniques such as stack of overlapping convolution instead of aggressive non-overlapping patch in the early layers\\n3. The model is structured in four stages. The early stages use fast convolution to capture local features efficiently, using a modified and lightweight version of ConvNeXt optimized for mobile latency.\\n4. In the lower-resolution stages, self-attention is used to model long-range dependencies. To address the challenges of traditional multi-head self-attention (MHA), the authors propose SHMA, which uses a single-head attention mechanism to minimize memory costs while retaining high performance. SHMA reduces latency by optimizing reshaping operations and leveraging spatial context interactions. SHMA is combined with a parallel feature extraction branch to enhance feature representation. The outputs from both branches are fused to enable dynamic information exchange, mitigating any performance drop caused by simplifying MHA\", \"weaknesses\": \"1. What is the runtime complexity of iFormer network?\\n2. When running on iPhone (mobile device), what is the peak memory consumption? \\n3. How long the iPhone charge will last if an iFormer based app is run on certain fps?\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer XLkf\", \"comment\": \"We appreciate the reviewer's insightful comments and questions!\", \"q1\": \"We apologize for the mistake. The latency of RepViT-M1.0 is 1.54 ms, rather than 1.64 ms. We have updated this information in Tables 3 and 4 in the revision (highlighted in blue). 
Additionally, we have conducted a thorough review of all experimental results included in the paper.\", \"q2\": \"We have included these two papers in the related work section and added a comparison of CMT in Table 15. These updates are highlighted in blue in Section 2.2 (Efficient Vision Transformers) of the revision.\"}", "{\"title\": \"Response to Reviewer fhTG\", \"comment\": \"We would like to express our gratitude to the reviewer for their insightful comments and questions.\\n> Q1: How does this method compare with neural architecture search (NAS) methods?\\n\\nIn our study, we have compared two NAS methods, namely MNV4-Conv and EfficientFormerV2. iFormer demonstrated superior performance as depicted in Tables 3 and 4. NAS relies on a pre-defined search space of operators with the goal of identifying the optimal combinations and configurations of various operators. The innovative and efficient SHMA block significantly outperforms existing operators, enabling iFormer to achieve a better trade-off between accuracy and latency. As a general methodology, NAS can also be applied to iFormer and propel it towards a new SOTA. \\n\\n> Q2: How does the designed model perform on other mobile devices, such as NVIDIA Jetson Nano or Raspberry Pi?\\n\\nDue to limited device availability and time constraints, we have not evaluated iFormer on a broader range of devices. 
In the future, we plan to expand the evaluation of iFormer to more platforms, including NVIDIA Jetson Nano and Android devices.\\nIt is important to note that designing a model that performs well across multiple hardware platforms is challenging, as varying memory and computational architectures can lead to different behaviors.\"}", "{\"summary\": \"This paper presents a new family of mobile hybrid vision networks, called iFormer, by integrating the fast local representation capacity of convolution with the efficient global modeling ability of self-attention.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is easy to follow, with clear writing and presentation.\\n2. Evaluation results are comprehensive.\", \"weaknesses\": \"1. How does this method compare with neural architecture search (NAS) methods?\\n\\n2. How does the designed model perform on other mobile devices, such as NVIDIA Jetson Nano or Raspberry Pi?\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Nice Response\", \"comment\": \"Thank you for your detailed response. It has clarified some of my questions, and I will increase my score from 5 to 6.\"}", "{\"comment\": \"Thank you for the response. All of my concerns have been addressed. 
I will keep my initial score.\"}", "{\"title\": \"Response to Reviewer waKF (Part 2/2)\", \"comment\": \"W2 and Q:\\n\\nAlthough iFormer is designed for mobile-device applications, the combination of fast local representation capacity of convolution and the efficient global modeling proficiency of the proposed SHMA enables its scalability for a broader range of applications.\\nTo demonstrate the scalability of iFormer, we developed a larger model named iFormer-H with 99M parameters and trained it for 300 epochs following the same strategy outlined in Section B of the revision (highlighted in blue). It is important to note that we add drop path and layer scale, which are commonly used in the training of larger models [1,2,3]. The performance results are provided as follows:\\n\\n| Model | Params | GMACs | Top-1 |\\n|:------------------|:------:|------:|------:|\\n| ConvNeXt-Base[1] | 89M | 15.4G | 83.8 |\\n| TransNeXt-Base[2] | 90M | 18.4G | 84.8 |\\n| iFormer-H | 99M | 15.5G | 84.8 |\\n| MaxViT-Base [3] | 120M | 24.0G | 84.9 |\\n\\nA highlight from the results is that iFormer is not specifically designed or trained for this scale. Despite this, iFormer-H outperforms ConvNeXt, achieving a 1.0% increase in accuracy while maintaining a similar number of FLOPs. Additionally, it demonstrates comparable performance to TransNeXt-Base, despite utilizing fewer FLOPs. These findings indicate the potential for broader applications of iFormer. We will also open source this model with source code in the Github and plan to explore larger models suitable for mobile devices in future work.\\n\\n[1] A ConvNet for the 2020s. CVPR 2022\\n\\n[2] TransNeXt: Robust Foveal Visual Perception for Vision Transformers. CVPR 2024\\n\\n[3] MaxViT: Multi-Axis Vision Transformer. ECCV 2022\"}", "{\"metareview\": \"This paper introduces iFormer that combines convolutional neural networks with Transformer-based architectures. 
Evolving from ConvNeXt, the proposed model integrates Single-Head Modulated Attention to replace parts of the Conv blocks in later stages, aiming to balance efficiency and performance. After rebuttal, all reviewers agree that this paper offers a well-structured and experimentally validated approach to mobile hybrid vision networks. The authors are encouraged to incorporate these updates into their final submission to strengthen the paper\\u2019s clarity, justification of design choices, and overall impact.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion stage, 4 out of 5 reviewers responded to the author's rebuttal and agreed that it addressed their main concerns. Given the positive scores and overall feedback, I recommend accepting this paper as a poster.\"}", "{\"title\": \"Global Response\", \"comment\": \"We would like to express our heartfelt gratitude to all PCs, SACs, and Reviewers for their efforts.\", \"and_we_would_like_to_highlight_our_contributions_as_follows\": [\"The detailed exploration roadmap could inspire further exploration in designing efficient architectures.\", \"Our newly introduced SHMA effectively minimizes memory costs while maintaining high performance.\", \"iFormer outperforms SOTA baselines across a comprehensive range of tasks.\", \"As suggested by the reviewers, we have made the following revisions to the manuscript (including Supplementary Material):\", \"A more comprehensive discussion in the related work section.\", \"A more detailed illustration of SHMA.\", \"Corrections to writing errors related to RepViT and revision throughout the manuscript.\", \"Update the Top-1 accuracy of iFormer-L from 81.7 to 81.9. 
This improvement was achieved by employing a drop path rate of 0.1, without making any changes to other parameters.\", \"Additional ablation studies on the choice of convolutional blocks versus vit blocks.\", \"Further ablation studies examining scalability.\", \"An extended discussion on future work.\", \"Additional comparison including ablation studies and illustrations in relation to SHViT.\", \"Additional discussion of computation complexity.\", \"We have highlighted all the modifications in blue, which will be removed following the rebuttal process.\"]}", "{\"summary\": \"This paper presents a mobile hybrid vision network, iFormer. The paper goes from ConvNeXt to a lightweight mobile network. iFormer removes memory-intensive operations in MHA and employs an efficient modulation mechanism. The author conduct standard benchmark experiments on ImageNet, COCO and ADE20K.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I think the logic of exploration in this article, starting with ConvNeXt, first \\u201clightening\\u201d the ConvNeXt to create a streamlined\\nlightweight network, then exploring the attention module, is reasonable. \\n\\nI think the analysis about \\u201ccosine similarity between multiples\\u201d proves that using a single attention is good and worth supporting. \\n\\nI think the experiment reported in this paper is comprehensive (imagenet, coco, ade-20k). The paper also reports some knowledge distillation results, which is suitable in mobile network papers.\", \"weaknesses\": \"1:\\nSingle head self-attention has been conducted in \\\"Shvit: Single-head vision transformer with memory efficient macro design\\\" .\\n\\nAlternative to standard self-attention has been conducted in GhostNetV2.\\n\\nModulation in the token mixer module has been conducted in Conv2Former. \\n\\nThis paper references many related methods, and while that is one approach, I don't think it stands out. 
Although such research is a decent format, I believe it impacts the novelty of this paper.\", \"2\": \"The process of evolving from the ConvNeXt baseline to the lightweight iFormer may not apply to slightly larger models, and some steps show very minimal improvements, making them hard to justify.\", \"questions\": \"The largest model shown by iFormer, iFormer-L, is only about 15M, which isn\\u2019t considered large, even for edge devices, especially since recent edge LLMs can reach 1B parameters. I wonder how well a larger iFormer (around 100M) would perform.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer waKF (Part 1/2)\", \"comment\": \"W1:\\n\\nWe would like to clarify that iFormer is not a simple concatenation of multiple existing methods. Instead, we propose several novel designs aiming at lightening ConvNeXt.\\n> Single head self-attention has been conducted in \\\"Shvit: Single-head vision transformer with memory efficient macro design\\\" .\\n\\n1. Single Head Attention (SHA) differs from its counterpart in SHViT in the following aspects: **In terms of motivation**, iFormer explores efficient attention mechanisms specifically tailored for the on-device environment, whereas SHViT is geared towards general-purpose GPUs, which may exhibit different hardware characteristics.\\n**In terms of methodology**, we utilize single head attention with more channels, while SHViT employs fewer than 1/4 of channels for attention. The reduced number of channels can result in a lower rank of the attention matrix, potentially degrading its expressiveness. Additionally, the split and concatenate operations in SHViT introduce extra runtime.\\n\\nThe experimental results in Table 15 show that iFormer achieves a better trade-off than SHViT.\\n\\nWe also conduct a more fair comparison with SHViT. 
We use the SHA baseline in Table 1, specifically denoted as 'SHA' in Figure 2. Then we move toward SHViT as follows:\\n\\n| Model | Params | GMACs | Latency | Top-1 |\\n|:----------------------------|:------:|------:|--------:|------:|\\n| SHA Baseline without Modulation | 9.9M | 1758M | 1.12ms | 79.8 |\\n| + split | 9.9M | 1758M | 1.18ms | - |\\n| + attention on 1/4 channels | 8.3M | 1547M | 1.02ms | - |\\n| + concat | 8.7M | 1579M | 1.11ms | 79.5 |\\n\\nThe discussion can also be found in Section D of the revised version (highlighted in blue). It can be observed that split and concat operations introduce additional runtime. Moreover, the SHA of iFormer demonstrates stronger performance than the counterpart of SHViT under similar latency (79.8 v.s. 79.5). This improved performance may be attributed to the greater number of channels in the attention mechanism.\\n> Alternative to standard self-attention has been conducted in GhostNetV2.\\n\\n2. SHMA is fundamentally different from the attention mechanism in GhostNetV2, which decomposes spatial interactions into horizontal and vertical interactions and behaves more like the Mlp-mixer [1]. In contrast, SHMA integrates single-head self-attention with modulation. Besides, the experiments in Table 3 suggest that iFormer demonstrates a superior and more efficient alternative to self-attention than GhostNetV2 (with V3 being the upgraded version). \\n\\n> Modulation in the token mixer module has been conducted in Conv2Former.\\n\\n3. Thanks for bringing Conv2Former to our attention; we have included a discussion of it in the related work section of the revision. Modulation is a general methodology for enhancing dynamic interactions. SHMA distinguishes itself as a novel approach from existing modulation techniques including Conv2Former, VAN, and FocalNet in the direct integration of self-attention into the context branch of modulation. 
This integration enables SHMA to capture more informative context.\\n\\n[1] Mlp-mixer: An all-mlp architecture for vision. NeurIPS 2021.\"}", "{\"summary\": \"This paper introduces a new family of mobile hybrid vision networks. By integrating the rapid local representation capability of convolution with the efficient global modeling ability of self-attention, the proposed architecture, iFormer, achieves significant performance in classification and several downstream tasks, while maintaining low latency on mobile devices for high-resolution inputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The study of model architecture could inspire further exploration in designing more efficient architectures.\\n2. The paper is well-organized and easy to follow.\", \"weaknesses\": \"1. In Table 1, iFormer-S achieves the same latency as RepViT-M1.0 with slightly fewer parameters, yet in larger variants, iFormer achieves lower latency with substantially more parameters compared to RepViT. What is the reason for this difference?\\n2. Some studies are not included in the comparison or the related work section, such as [1, 2].\\n\\n[1] Cmt: Convolutional neural networks meet vision transformers.\\n\\n[2] Learning efficient vision transformers via fine-grained manifold distillation.\", \"questions\": \"please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Dear Reviewer\", \"comment\": \"As the discussion period is approaching its end, if our responses have not sufficiently addressed your concerns or if additional clarification is required, please let us know. 
We sincerely appreciate your dedication to reviewing our work, and we are grateful for your insightful comments and the considerable time you have invested in evaluating our paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer aPLf\", \"comment\": \"We thank the reviewer for the positive feedback!\\n\\n> Q1: What is the runtime complexity of iFormer network?\\n\\nGiven an input $\\\\mathbf{x}\\\\in \\\\mathbb{R}^{C\\\\times H\\\\times W}$ and a window size of P $\\\\times$ P, as detailed in Section E, the computational complexity of iFormer is as follows:\\n$\\\\Omega (\\\\text{SHMA}) = 4HWC^2 \\\\text{(QKV and output projection)} +\\nHWC \\\\text{(element-wise product of modulation)} + \\n2P^2HWC \\\\text{(self-attention)}$,\\n\\n$\\\\Omega (\\\\text{FFN}) = 8HWC^2.$\\n\\nIn image classification, we do not utilize window attention since the feature size is 14$\\\\times$ 14 in stage 3 (it is equivalent to window attention when P=14). In downstream tasks, we adopt a window size of P=16.\\n\\n> Q2: When running on iPhone (mobile device), what is the peak memory consumption?\\n\\nCurrently, we employ the Xcode benchmark tool to evaluate the latency of our models. However, this tool does not report detailed memory consumption. We are actively exploring alternative methods to monitor the peak memory usage of iFormer on the iPhone.\\n\\n> Q3: How long the iPhone charge will last if an iFormer based app is run on certain fps?\\n\\nWe appreciate your suggestions, and we plan to develop an app that provides more comprehensive information when running iFormer on the iPhone, including estimated battery life, peak memory usage, and other relevant metrics.\"}", "{\"summary\": \"This paper designs iFormer, a new family of efficient mobile vision networks combining ConvNet and Transformers. 
The iFormer evolves from ConvNeXt with a series of efficiency designs.\\n\\nSingle-Head Modulated Attention(SHMA) is proposed as substitutional Transformer blocks to replace part of the Conv blocks in later stages of the enhanced ConvNeXt. SHMA replaces multi-head attention with single-head attention to improve efficiency and introduces a modulation mechanism to boost performance. \\n\\nThe resulting iFormer series achieves the best performance compared with state-of-the-art mobile-level models on different downstream tasks with lower latency.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is well-organized and easy to follow. Detailed design specifications and comprehensive experiments enhanced the integrity of the article and demonstrated its contributions.\\n\\nThe main contribution, SHMA, provides a new approach to designing efficient attention and Transformer blocks. The resulting iFormer series outperforms sota baseline mobile networks with stronger performance and lower latency.\", \"weaknesses\": \"W1:\\n\\nThe motivation and necessity of substituting half of the conv blocks at the third stage and all blocks at the last stage into Transformer blocks in ConvNeXt are still not very clear. From Figure 2, changing the conv blocks into SHA blocks gains a 0.4% improvement in performance but is also 0.12 ms (about 10%) slower. I'd like to know further explanation for this design and ablation studies on the choice of stages or different ratios of Conv versus Transformer blocks if possible.\", \"w2\": \"According to the citation of SHViT in this paper, I suppose the SHA refers to the Single-head self-Attention in SHViT design. But in Figure 4, full channels of input (CxHxW) are projected to Q/K/V (CxL) which does not align with the design of SHA in SHViT but looks like the traditional definition of Single-head attention that performs a self-attention on all channels of input using a single head. 
\\n\\nConsidering there are limited words about the details of SHA in this paper, I would expect further specification of which SHA is used in iFormer and comply with the pipeline figure accordingly.\", \"w3\": \"In this paper, the additional reshaping operations in MHA are considered as the reason for the slower inference speed compared with SHA. But multiple factors have an impact on the runtime speed difference and there's no evidence to support the extra runtime only or mainly comes from extra reshapings. \\n\\nFirst, depending on the code implementation, replacing MHA with Single-head self-Attention may remove the reshaping operation in self-attention, but also introduce additional split and concat operations. And generally, split and concat operations cost more memory and are slower than reshape. \\n\\nSecondly, SHA applies self-attention on fewer channels, which largely reduces the computational cost and speeds up runtime. \\n\\nTherefore I suggest the authors conduct an ablation study or provide empirical evidence to isolate the impact of reshaping operations versus other factors like split/concat operation and reduced self-attention channels on the inference speed. This would help clarify the main factors contributing to SHA's efficiency and provide a more comprehensive understanding of the proposed method.\", \"questions\": \"1. What is the motivation and justification for the necessity of the design that replaces half of the third stage and full last stage conv blocks with transformer blocks? Please refer to Weaknesses 1.\\n\\n2. I wonder what is the performance and latency of MHA as the missing step between 'kernel sz.' 
and 'SHA' in Figure 2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer wKqn (part 1/2)\", \"comment\": \"We thank the reviewer for the thorough and constructive feedback.\", \"w1_and_q1\": \"We would like to clarify that SHA serves as an intermediate design towards SHMA. So changing the convolutional blocks yields a 1.0% improvement.\\n> I'd like to know further explanation for this design and ablation studies ... if possible.\\n\\nWe conduct ablation studies on the choice of Conv versus ViT blocks, which ultimately lead to the architecture of iFormer. We choose the model after enlarging the kernel size as a starting point, then we progressively replace the convolutional blocks in Stages 3 and 4. We do not modify Stages 1 and 2 as they have larger spatial dimensions, which significantly increase memory requirements for the self-attention mechanism.\\nWe present these findings here and have incorporated them into Section 5 (Choice of Conv v.s. 
ViT Blocks) of the revised manuscript, highlighted in blue.\\n\\n| Model | Params | GMACs | Latency | Top-1 |\\n|:-----------------------------------------------------------------|:------:|------:|--------:|------:|\\n| Baseline | 9.4M | 1760M | 1.00ms | 79.4 |\\n| Replacing 22% Conv Blocks in Stage 3 as SHA | 9.1M | 1724M | 1.02ms | 79.5 |\\n| Replacing 22% Conv Blocks in Stage 3 as SHMA | 9.2M | 1739M | 1.04ms | 79.6 |\\n| Replacing 50% Conv Blocks in Stage 3 as SHA | 8.8M | 1689M | 1.04ms | 79.5 |\\n| Replacing 50% Conv Blocks in Stage 3 as SHMA | 8.9M | 1712M | 1.07ms | 79.8 |\\n| Replacing 78% Conv Blocks in Stage 3 as SHA | 8.3M | 1635M | 1.12ms | 79.3 |\\n| Replacing 78% Conv Blocks in Stage 3 as SHMA | 8.5M | 1685M | 1.17ms | 79.6 |\\n| Replacing 100% Conv Blocks in Stage 3 as SHA | 7.9M | 1599M | 1.17ms | 78.1 |\\n| Replacing 100% Conv Blocks in Stage 3 as SHMA | 8.3M | 1665M | 1.25ms | 79.0 |\\n| Replacing 100% Conv Blocks in Stage 3 as SHMA and 100% in Stage 4 | 10.0M | 1792M | 1.15ms | 80.4 |\\n\\nSince Stage 4 contains only two blocks, we do not split the ratio further. As shown in the above table, the ViT block incurs more runtime. By replacing half of the convolutional blocks in the third stage and all blocks in the final stage, we achieve a favorable trade-off between accuracy and latency.\"}" ] }
4ymHtDAlBv
Fast Salient Factor Concentration (FSFC) Recurrent Neural Network for Text Classification
[ "Weihao Xia", "Huachuan Wang", "Qiu Chen", "Junlong Ma", "James Ting-Ho Lo" ]
Models based on Recurrent Neural Networks (RNNs) have been widely employed for text classification tasks. Traditional RNNs primarily emphasize long-term memory capabilities. However, this approach does not fully align with human cognitive learning processes, particularly in the context of classification tasks. The human brain typically extracts essential information relevant to the classification categories, disregards irrelevant details, and compresses the input to accelerate decision-making. Inspired by this, we propose a novel architecture, the Fast Salient Factor Concentration (FSFC) RNN, specifically designed for classification tasks. FSFC dynamically clusters and compresses semantic information by leveraging the short-term memory capabilities of recurrent neural networks. Experimental results demonstrate that FSFC achieves performance comparable to existing RNNs, while significantly improving training efficiency in classification tasks. Based on the YelpReviewFull dataset, FSFC improves accuracy by 1.37% over Long Short-Term Memory (LSTM), while reducing training time by 86%. Additionally, we propose a new evaluation metric, E-score, which integrates both accuracy and time efficiency to comprehensively assess the overall performance of each network.
[ "Text Classification", "Semantic Information Clustering", "Recurrent Neural Network" ]
https://openreview.net/pdf?id=4ymHtDAlBv
https://openreview.net/forum?id=4ymHtDAlBv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sj9AsfmoWg", "jIy1FXfxQ5", "3RYRZH12Uv", "2G44hpkqiG" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730381339140, 1730777241757, 1730710021709, 1731462327720 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11017/Reviewer_rSDc" ], [ "ICLR.cc/2025/Conference/Submission11017/Reviewer_MP7G" ], [ "ICLR.cc/2025/Conference/Submission11017/Reviewer_gMMw" ], [ "ICLR.cc/2025/Conference/Submission11017/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes Fast Salient Factor Concentration (FSFC) RNN, a new architecture for classification tasks, to enhance the processing of crucial information by dynamically clustering and compressing semantic information. The performance on YelpReviewFull proves that FSFC has a higher accuracy with less training time.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper is well-organized, and the writing is fine.\\n\\n2. The paper evaluates the performance of FSFC on four datasets. The experimental results show the effectiveness and efficiency of FSFC.\", \"weaknesses\": \"1. The motivation of this paper is not very clear. I'm not sure what problems this paper is trying to solve (Text classification? An alternative to RNN? An alternative to RNN on Text classification?). It would be beneficial to specify what problems the authors aim to address\\u2014are they focusing on text classification, proposing an alternative to RNNs, or something else? Anyway, the contribution seems limited.\\n\\n2. The idea presented in this paper does not appear particularly interesting. It proposes an alternative to RNNs for text classification, but I believe it lacks significant contributions to the current NLP community. Nowadays, many practitioners favor pre-trained models over RNNs for text classification tasks. 
Additionally, large language models (LLMs) tend to focus on multi-tasking rather than on single NLP tasks.\\n\\n3. If the goal is to propose a new RNN, conducting experiments only on text classification is insufficient to verify the method's generalization.\\n\\n4. Even for text classification tasks, the models compared in the paper are not comprehensive (e.g., ELMO, BERT, LLMs, and so on). The paper lacks comparisons with these strong methods for text classification. As I mentioned in Weakness 2, these models are precisely the kinds of models that are commonly utilized in the field of text classification today. If the paper focuses on text classification, not comparing with these mainstream models would be unfair.\", \"questions\": \"1. What is the motivation to propose an alternative to RNN for text classification? What problems are you trying to solve? Does \\\"the long-term memory mechanisms of traditional RNNs do not fully align with human cognitive learning processes\\\" really matter? How does it impact the performance in text classification tasks? Could you give us some concrete examples of how the misalignment between RNNs and human cognitive processes impacts performance in text classification tasks?\\n\\n2. What is the contribution of FSFC? The author should discuss how FSFC compares to or complements more recent approaches in NLP. RNN-based methods for text classification are quite outdated now. While I am not against RNNs, achieving minor improvements and reducing training time seems to bring little new knowledge to the current NLP community.\\n\\n3. How about employing other methods like BERT and LLMs? What is the value of an alternative to RNN compared with pretrained language models? 
Why do we still need RNN-based methods for text classification?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an RNN-based model called Fast Salient Factor Concentration (FSFC), designed to improve training efficiency in text classification by using short-term memory and semantic clustering. While FSFC's concept is intuitive, it has several critical limitations. The approach offers limited innovation, appearing incremental relative to existing RNN and attention-based techniques. Additionally, the research feels outdated, as NLP has largely moved toward transformer-based models. Performance evaluations yield mixed results: FSFC reduces training time but sacrifices accuracy on some datasets. Moreover, baseline comparisons are insufficient, with FSFC only benchmarked against LSTM and GRU models, excluding state-of-the-art transformers like RoBERTa and recent LLMs.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Easy to follow and clearly written;\", \"Lightweight solution for text classification compared to transformer models.\"], \"weaknesses\": [\"Limited novelty: The concept of text compression and segmentation is well-explored, with similar techniques in attention mechanisms and memory simplification.\", \"Outdated approach: It\\u2019s advisable for the authors to employ BERT-based models or LLMs, as transformers are now the NLP standard. Although addressing transformer complexity with RNNs is a good direction, applying it to basic tasks like text classification feels outdated.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper provides a method to remove the gating mechanisms of LSTM and condense memory. 
The proposed method achieves better training speed while retaining accuracy.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper proposes a structure that is 5 times faster to train than LSTM.\", \"weaknesses\": \"1. It is tough to understand Figure 1. I can not find any RNN or recurrent in this figure. What is your whole network? Moreover, this figure doesn't have detailed captions.\\n\\n2. Why does long-term memory not matter in classification? Please provide justifications with citations or experiments.\\n\\n3. This paper does not compare with other improvements on LSTM. Do other improvements over LSTM reduce the training time and achieve better accuracy?\\n\\n4. This paper does not provide details about the experiment setup. Although we do not know what the network structure proposed in this paper looks like, nor do we know how the configuration compares to GRU and LSTM, or the length of the experimental data.\\n\\n5. E-score is not more effective than just presenting accuracy and time.\", \"questions\": \"1. Does your model reduce the inference time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thank you to all the reviewers for their time and valuable feedback. The original intent of our method was to replace the basic LSTM or GRU modules in various complex models to accelerate training and inference speed. There are aspects of the paper that need improvement, and after further discussion, we have decided to withdraw the manuscript for further refinement. Once again, we sincerely appreciate the reviewers' efforts.\"}" ] }
4y6Q98hJzr
Towards Efficient and No Forgetting Domain Continual Pretraining by Mitigating the Stability Gap
[ "Yiduo Guo", "Jie Fu", "Huishuai Zhang", "Dongyan Zhao", "Yikang Shen" ]
Adapting Large Language Models (LLMs) to specialized domains like medicine and law through domain continual pre-training has become the cutting-edge method. However, contrary to our expectations of immediate gains, we’ve uncovered a surprising phenomenon: a temporary performance drop at the start of the process, followed by a performance recovery phase. This drop is not only unexpected but remarkably consistent across different model sizes and domains, such as medical and law. To gain a deeper understanding of this issue, we introduce the concept of stability gap—borrowed from visual models dealing with new class classifications—to explain this initial drop in LLM performance. Based on this concept, we hypothesize that the initial performance drop arises from instability in the model’s general abilities, which we further validated through our experiments. We further reveal that this initial instability is intricately tied to training settings that involve distribution shifts. To address this initial instability and enhance LLM performance within a fixed compute budget, we propose one training strategy that reduces the instability by increasing the epoch number, along with two data sampling strategies focused on data quality and corpus distribution. We conduct various experiments on Llama-family models to validate the effectiveness of our strategies in both medical and legal continual pre-training and instruction tuning. For example, our strategies improve the average medical task performance of the OpenLlama-3B model from 36.2\% to 40.7\% with only 40\% of the original training budget and enhance the average general task performance without causing forgetting. Furthermore, we apply our strategies to continually pre-train and instruction-tune the Llama-3-8B model. The resulting model, Llama-3-Physician, achieves the best medical performance among current open-source models and performs comparably to or even better than GPT-4 on several medical benchmarks.
[ "Continual pretraining" ]
https://openreview.net/pdf?id=4y6Q98hJzr
https://openreview.net/forum?id=4y6Q98hJzr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uwNc55Ylqm", "bhSefJzcah", "FdxJxvO6uV", "EWSxG0kPIs", "9yqtkINsOb" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730113765043, 1730720913454, 1734350390932, 1730578248307, 1730339462380 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13381/Reviewer_5wq8" ], [ "ICLR.cc/2025/Conference/Submission13381/Reviewer_KEZA" ], [ "ICLR.cc/2025/Conference/Submission13381/Authors" ], [ "ICLR.cc/2025/Conference/Submission13381/Reviewer_WWTv" ], [ "ICLR.cc/2025/Conference/Submission13381/Reviewer_X3mM" ] ], "structured_content_str": [ "{\"summary\": \"This paper uses the concept of stability gap to explain the initial drop in LLM performance during continual pretraining in a new domain. The authors propose three training strategies to address the initial instability: 1) continually pretrain the LLM on a properly sized corpus subset for multiple epochs; 2) Continually pretrain LLM on a high-quality corpus subset; 3) Using data mixture rate that is similar to the pretraining data. The proposed strategies improve the accuracy of the LLM in the new domain when compared to the existing continual pretraining techniques.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The proposed strategies are easy to implement.\", \"The LLM fine-tuned with the proposed strategies achieved the highest averaged accuracy score on a suite of medical question answering tasks.\"], \"weaknesses\": [\"## Major\", \"It is important to justify the methods of Muennighoff et al. (2024) and Lin et al. (2024) used in this paper. (I assume the four subsequent sentences explain the method (L158-161)). Here are some missing details:\", \"Why was KenLM chosen?\", \"What is a \\\"high-quality medical reference corpus\\\"? how do you define it? 
This is a fairly critical point because the \\\"highest-quality\\\" medical corpus can also be defined as those that resemble the downstream tasks the most, which makes the findings more expected (the closer the continual pretraining data is to the downstream tasks, the better the model will perform in the downstream tasks).\", \"The authors claim that the average accuracy of the LLM on the medical tasks initially drops and rises during the continual pretraining.\", \"However, the drop itself does not look significant (less than 1% averaged accuracy). This makes the observation less strong. (See Question 2)\", \"This paper contains a flawed assumption due to the lack of access to the pretraining corpus. If a stability gap was proposed to explain the ability of the model to maintain performance on previous tasks, such an analysis cannot be achieved if we do not have access to the pretraining corpus.\", \"The authors claimed (L233-235) that language modelling loss also preserves general knowledge and text modelling capabilities, which is a big assumption that is not backed by any evidence.\", \"Note that text modelling capabilities may still be preserved via language modelling loss during the domain adaptation (continual) pretraining, however, we cannot guarantee that the general knowledge is still being preserved.\", \"Additionally, there is no guarantee that the continual pretraining corpus was not included in the pretraining corpus. To examine this, the authors may have to conduct a pretraining from scratch.\", \"There exists a logical gap between the concepts of relative weight update, stability gradient, and instruction-following ability.\", \"The authors concluded that the relative weight update indicates the stability gradient and, in turn, instruction-following ability (L241-253). 
However, there is no guarantee that relative weight update relates to stability gradient, let alone instruction-following capability.\", \"Additional experiments using pretraining from scratch may help understand this phenomenon better.\", \"There are several mentions of a \\\"properly sized\\\" subset. However, they are not properly defined.\", \"The performance improvement (Figure 4) when compared to the baseline seems to be <1%. This does not look very significant.\", \"## Minor\", \"Note that the submission and paper titles are different\", \"Abstract is generally filled with jargon which makes it harder to follow.\", \"L50-51: The last sentence of paragraph 1 in the Introduction can benefit from some citations.\", \"L56: Missing citation for \\\"Previous research\\\"\", \"The introduction section still contains a lot of undefined jargon (i.e., \\\"proper size\\\", \\\"highest-quality tokens\\\", \\\"data mixture\\\")\", \"L194: Concluding that the \\\"LLM has acquired medical domain knowledge\\\" based on the perplexity score is a bit of an overclaim. Consider rephrasing it.\", \"Table 2: This misses the performance of the Llama-3-8B models without fine-tuning.\", \"The authors claim that the proposed strategies are computationally more efficient. By how much exactly? What metrics should you evaluate this on?\", \"## Very minor (e.g., typos, etc)\", \"Use consistent verb tense (many inconsistent uses of present and past tenses)\", \"Typo L15: \\\"phrase\\\" -> \\\"phase\\\"\", \"L68: Instead of \\\"harness\\\" perhaps \\\"mitigate\\\" it? since you would like to mitigate the stability gap as opposed to harnessing it.\", \"Typo L125: lowercase \\\"Language models\\\"\", \"Typo L125: \\\"RoBERTa\\\"\", \"Page 4: Perhaps observations 1 and 2 can be swapped because in practice we may not know the downstream tasks during the (continual) pretraining phase.\", \"Figure 6b: The caption does not seem to be correct. 
The figure seems to show accuracy during law continual pretraining, while the caption is about relative parameter updates during the medical continual pretraining process.\"], \"questions\": [\"1) The stability gap concept proposed by previous studies is about the inability to maintain performance on **prior** tasks and the one mentioned in this paper is about the performance in the **new** target task. How are they two related in your experiments?\", \"2) The initial drop in the averaged accuracy of the LLM on the medical tasks looks very insignificant.\", \"Have you done a statistical test to verify this?\", \"Is the small drop (<1%) in line with the findings of previous stability gap studies?\", \"3) Data Mixture Results (Figure 4b and 4c):\", \"The authors may need to compare the proposed strategies with the baseline (full data with multiple epochs).\", \"The average medical and commonsense performance seems to drop in the 5th epoch. Why is that the case? What would happen if you continue the pretraining to 6th, 7th, ... epoch?\", \"4) How similar is the \\\"high-quality\\\" medical reference corpus to the downstream tasks?\", \"If you run the KenLM model on the downstream datasets, what is the perplexity? Would the perplexity be very low too?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript focuses on the problem of stability gap- i.e., LLMs dropping their performance drastically when continually pretrained on a new domain and then recovering performance gradually. 
The manuscript demonstrates the stability gap in the medical domain using a relatively small language model and proposes three strategies to overcome it and stabilize the pre-training loss: (1) continual pre-training with a random partition of the domain across multiple epochs; (2) continual pre-training using a notion of high-quality tokens selected using KenLM; (3) utilizing existing pre-training data-mixture ratios to selectively replace the current corpora with target domain corpora. The manuscript then applies the strategy to Llama-3-8B in continual pretraining and fine-tuning settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is reasonably well organized and written.\", \"The findings are well explained and justified with empirical analyses where required.\", \"The authors conduct extensive experimentation to cover different possible research questions\", \"The concept of stability gap is not new and has been extensively studied in Computer vision but relatively less in NLP. The paper draws its research question from this and possible solutions from CV. The paper compares its proposed strategies with existing work, e.g. Ibrahim et al (Rewarm and decay), Replay (Chen et al) etc.\"], \"weaknesses\": [\"The paper uses DEITA and KenLM for assessing the quality of samples in the target domain.\", \"Need a baseline with only Continually pretrained with all data (all data vs only 50B) vs proposed strategy\", \"Table -1: The performance vs 10B replay is pretty close. The performance difference seems to solely arise due to MedMCQA;\", \"may need statistical significance tests to see if the differences are due to proposed strategies or due to randomness.\", \"Table 2: >20% performance jump again on MedMCQA for Physician vs LLaMa-3-8B Fine-tuned seems odd. Are there any possible explanations, especially the difference in performance for other datasets <5%. 
(Please add statistical significance tests- see last bullet)\", \"Performance could be validated using statistical significance tests- either permutation or signed-rank tests. See https://aclanthology.org/D12-1091/\"], \"questions\": [\"Line 288: `... of each sample in the entire medical corpus.' What does each sample indicate (documents, QA pairs, or anything else)? Are the samples drawn from only the dataset being evaluated, or all of them combined?\", \"How was the 50B domain text obtained from wiki-medical-terms? The website seems to indicate that the corpus has 6k medical terms and their descriptions. Do the terms plus descriptions as a whole amount to 50B tokens? Any other relevant statistics?\", \"The paper's main contribution seems to arise from creating the High Quality (HQ) partition using KenLM- Could the authors add more information about how this was performed? E.g., what was the size of _n_ if an n-gram-based approach was used?\", \"Creating the HQ partition could have been done in other ways- entropy, ranking, or using MLE for importance. Can the authors comment on why KenLM was chosen? Can they compare this selection with others? Do they work in similar ways/show similar performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper suggests that performance instability when training LLMs for specialized domains arises from distribution shifts. As such, they propose a new continual pre-training strategy that incorporates data quality and corpus distribution to identify \\\"better\\\" samples. In addition, the idea is to use these better subsets of samples and train for more epochs to ensure the LLM is in the performance recovery phase. 
The authors illustrate their performance on 4 benchmark QA datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper derives insights from stability gaps introduced in the context of visual models for continual learning to explain the behavior of performance drops with LLM continual pre-training for the specialized domain.\", \"Evaluation results with various biomedical-domain fine-tuned LLMs and QA datasets demonstrate the potential of the strategy.\", \"For some tasks and datasets, there is a noticeable improvement using less number of training tokens, especially on the MedMCQA task.\"], \"weaknesses\": [\"The base architecture used is only the OpenLlama3B model with a single parameter size. The natural question is whether such a strategy is applicable across various LLM families and sizes (for example, GPT-NeoX was used by Ibrahim et al. with a 10B parameter model which might be comparable to the 8B rather than the 70B). Can you provide a comparison against GPT-NeoX 10B to provide a meaningful evaluation of your strategies?\", \"The motivation for the learning strategy is under a fixed computational budget, which seems to be only related to the number of training tokens and not the number of epochs. Can you explicitly define computational budget and then evaluate a scenario where token count and epoch count are kept constant to better understand the tradeoffs when considering a computational budget? This is a more elaborate setting than Section 3 which only assessed 5 epochs.\", \"The methodology, efficient learning strategies, and evaluation sections, all seemingly blend together without necessarily a coherent story or separation of sections. 
For example, in section 3, the differences between the two subsections seem to blend together whereas it would have been better to introduce the stability gap and demonstrate that the instability that is often observed seemingly is explained in the context of this, and should be done for one common set of experiments (note that there is swapping between medical domain, common sense task performance but with very little context for these experiments until Section 5). Section 4 seems to be more of an ablation study rolled in with their own method. As such, while it seems like the authors have done a lot of reasonable experiments, untangling what they are introducing and evaluating is very hard to understand without multiple reads. My suggestion would be to reorganize so that both section 3 and 4 are one contiguous section, where the first subsection focuses on motivating the stability gap in the context and then providing the strategy to mitigate this by choosing higher-quality samples. Section 5 can then focus on experiments where they are concisely targeting specific aspects of the strategy.\", \"There are a lot of results, but limited discussion about them, especially comparison of performance. Please provide a more detailed discussion of your results. Moreover, it would be helpful to clarify which fine-tuned models may not be tuned on the same task so the performance might be hindered by this, whereas others might be fine-tuned on the task so it might be reasonable to expect them to do well. To accommodate this expansion, space can be made by shrinking some of the figures.\", \"Some of the graphs do not provide sufficiently more information. For example Figure 2 (b) reports the beginning of only 1 model for the millions of tokens, but the trend doesn't seem to be that much more informative than Figure 2a. 
Similarly, much of the motivation was for specialized domains, but there is only a focus on the medical domain, whereas it would have been more compelling with the Appendix B results embedded here.\"], \"questions\": [\"Why is the performance substantially better for your strategy on MedMCQA? Across other tasks, performance gains seem more mixed and not necessarily as beneficial. What about the MedMCQA benchmark makes it benefit the most from the continual pre-training?\", \"Does this technique work for other datasets? In looking at the legal dataset results in Appendix F, there are similar findings suggested for the zero-shot but the experimental comparisons.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores LLM behaviors when adapting to specific domains via continual pre-training. The authors point out an unexpected \\\"stability gap\\\", which is an initial drop in performance before subsequent recovery. The authors provide three training strategies to address this unexpected trend and conduct experiments on medical and law benchmarks.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper's motivation is clear, and it is well structured around the problems in continual pretraining, the proposed strategies, and the results.\\n2. The authors conduct experiments on different benchmarks across medicine and law to show the effectiveness of the proposed methods in continual pretraining.\", \"weaknesses\": \"1. The proposed strategies seem similar to the conclusions of existing works. For example, high-quality data is important for model training [1,2], and using a data mixture rate similar to the pre-training data alleviates data distribution shift [3,4].\\n2. The experiments are only conducted on relatively small models. The gap may arise because the small model is not robust enough on the new dataset. It is unclear whether larger models (for example, 13B or 70B) encounter the same issue in continual pretraining.\\n3. The IFT model comparison seems unfair to me because some IFT models are not tuned on the specific training dataset and they have different base models. \\n4. It is unclear whether the proposed IFT models overfit to the evaluation dataset, since the IFT dataset is built from the original training data.\\n\\n\\n[1] Chen et al (2023). AlpaGasus: Training a Better Alpaca with Fewer Data\\n[2] Zhou et al (2023). LIMA: Less Is More for Alignment\\n[3] Parmar et al (2024). Reuse, Don't Retrain: A Recipe for Continued Pretraining of Language Models.\\n[4] Ibrahim et al (2024). Simple and Scalable Strategies to Continually Pre-train Large Language Models\", \"questions\": \"1. It is not clear to me how the high-quality data are obtained from the original medical corpus. Can you further explain the quality evaluation metric used for the data selection?\\n2. Figure 4(a) is not clear to me. What does the x-axis represent? Can you further explain this figure and your finding? \\n3. The mixture strategy confused me. Can you further explain the mixture strategies? Specifically, \\n\\\"we follow the Llama mixture rate (Touvron et al., 2023a) to collect 5 billion tokens initially. We then replace the CC and C4 data (82% of the 5 billion tokens) with medical tokens sampled from the highest quality 5 billion medical tokens (HQ-5b). \\\" \\nWhat are the initial 5 billion tokens? How do you replace the tokens? \\n4. Does the stability gap exist on larger models, such as 13B or larger? Could you conduct experiments on larger models to show the importance of the proposed issue?\\n5. Strategy 1 trains more epochs on a smaller dataset, which may have a higher chance of overfitting. Can you compare the continual training's performance on other OOD benchmarks (e.g. DROP, GSM8K, HumanEval) to probe this overfitting issue?\\n6. 
Does the 'stability gap' change under different learning rates and warm-up strategies?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
4y4t7yOvJO
POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator
[ "Eugenio Lomurno", "Samuele Mariani", "Matteo Monti", "Matteo Matteucci" ]
Neural Architecture Search (NAS) automates the design of neural network architectures, minimising dependence on human expertise and iterative experimentation. While NAS methods are often computationally intensive and dataset-specific, employing auxiliary predictors to estimate architecture properties has proven extremely beneficial. These predictors substantially reduce the number of models requiring training, thereby decreasing overall search time. This strategy is frequently utilised to generate architectures satisfying multiple computational constraints. Recently, Transferable Neural Architecture Search (Transferable NAS) has emerged, generalising the search process from being dataset-dependent to task-dependent. In this domain, DiffusionNAG stands as a state-of-the-art method. This diffusion-based method streamlines computation, generating architectures optimised for accuracy on unseen datasets without the need for further adaptation. However, by concentrating exclusively on accuracy, DiffusionNAG neglects other crucial objectives like model complexity, computational efficiency, and inference latency -- factors essential for deploying models in resource-constrained, real-world environments. This paper introduces the Pareto-Optimal Many-Objective Neural Architecture Generator (POMONAG), extending DiffusionNAG through a many-objective diffusion process. POMONAG simultaneously considers accuracy, the number of parameters, multiply-accumulate operations (MACs), and inference latency. It integrates Performance Predictor models to estimate these secondary metrics and guide the diffusion gradients. POMONAG's optimisation is enhanced by expanding its training Meta-Dataset, applying Pareto Front Filtering to generated architectures, and refining embeddings for conditional generation. These enhancements enable POMONAG to generate Pareto-optimal architectures that outperform the previous state-of-the-art in both performance and efficiency. 
Results were validated on two distinct search spaces -- NASBench201 and MobileNetV3 -- and evaluated across 15 image classification datasets.
[ "Neural Architecture Search", "Many-Objective", "Pareto-Optimal", "Meta-Dataset", "Transferable Neural Architecture Search" ]
Reject
https://openreview.net/pdf?id=4y4t7yOvJO
https://openreview.net/forum?id=4y4t7yOvJO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zXhpTg4TTR", "um4gALyOYt", "oTz2BjoRaZ", "oC6prOZl54", "my6CpT4gIg", "lGzoLuTw3C", "kD5ywhuu6e", "jhgPioy3K0", "grGGH2DgTK", "ghrn26tUTy", "eMLqdgmxUP", "dqGKQj59eF", "ZCZhYk2OP1", "UveP2uLlZI", "RLzGnGMkGT", "Qvl1oGBSHz", "OhIdKSOwxJ", "HyGwzE9PuE", "HphRBmYz5H", "ElgwxPIDTc", "BWSl7HZrbL", "BAdvNjVL4S", "6WNnptOc5w", "1eBUC8xc69" ], "note_type": [ "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737523656542, 1730407151744, 1732101695071, 1732656915708, 1732631797417, 1732629896840, 1732344940530, 1731584861602, 1732027106009, 1735020116611, 1730721607100, 1732629991651, 1732101812672, 1731600571096, 1731600585157, 1732438330438, 1730714380992, 1730561277469, 1732632120591, 1732625473344, 1732101924528, 1731504789829, 1730597839913, 1732531213829 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_hqyM" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_hqyM" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_w9TN" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Area_Chair_cvfB" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_SujD" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_1FYP" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_w9TN" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_1FYP" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_s4FH" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Authors" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_s4FH" ], [ "ICLR.cc/2025/Conference/Submission4704/Reviewer_SujD" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The work presents an extension to DiffusionNAG and incorporates multi-objective search. Model complexity, computational efficiency, and inference latency are key measures captured through number of parameters, MACs, and latency estimation. These measures are recorded in a meta dataset for NASBench201 and MobileNetV3 with 10k and 20k architectures respectively. During search, pareto front filtering segments three regions corresponding to high accuracy, high efficiency, and best balance of the two using the auxiliary metrics from earlier. The experimental results are promising across a sufficiently diverse set of benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Strong writing, ideas are explained well and thorough\\nThe experiments are presented well and results are thorough\\nNovelty is presented in 2 algorithmic improvements and the contribution of a multi-objective meta dataset\", \"weaknesses\": \"For transferable NAS, the choice of benchmarks are interesting, TransNASBench provides a NAS dataset specifically for transferability in NAS. 
Exploring performance on this dataset would have been nice\\nMobileNetV3 and NB201 are also fairly dated search spaces, performance in more recent search space or architecture styles (vit) should be explored\\nThe specific details of the algorithmic contribution are a bit vague. How is pareto front filtering done? \\nImageNet results are sparse and comparison to modern NAS methods on this benchmark are sparse\", \"questions\": \"How did you choose the search spaces to apply POMONAG?\\nThe algorithmic contribution seems like a limited extension of DiffusionNAG. What complication arose from integrating multi-objective NAS into DIffusionNAG?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are very grateful for the reviewer's contribution and dedication, as well as for his valuable comments and time.\\n\\nWe would like to point out the first clear aspect of our work. The correlation between secondary metrics varies significantly depending on the type of architectures considered. For architectures with traditional layers such as Conv2D and Dense, observed in NASBench201, there is a natural correlation between metrics and MACs, as documented in Appendix A. However, this relationship becomes more complex in MobileNetV3 architectures, which employ advanced components specifically designed to reduce parameters while maintaining expressiveness.\\nFor example, 2D Depthwise Convolutions and Inverted Residual Bottlenecks are designed to drastically reduce the number of parameters by factoring convolutional operations. These blocks, although \\u2018light\\u2019 in terms of parameters, require a more complex sequence of operations: a Depthwise Convolution followed by point-wise convolutions and normalisations. 
Similarly, the Squeeze and Excitation blocks introduce an attention mechanism with few parameters but multiple global pooling operations and channel transformations.\\nInference time, in particular, has even more independent dynamics in both search spaces. Operations such as pooling or activations, while having minimal parameters and MACs, can have variable execution times depending on their hardware-specific implementations. This lack of direct proportionality between the metrics highlights the need for many-objective optimisation to effectively balance these aspects, which are complementary with accuracy, and not aligned with each other.\\n\\nThe contributions of POMONAG extend significantly beyond the simple adaptation of DiffusionNAG. The Many-Objective Reverse Diffusion Guidance framework (lines 162-215) introduces an innovative approach that harmonises competitive gradients, ensuring stable convergence during both training and inference. This result is particularly relevant considering that the optimised metrics (accuracy, parameters, MACs and latency) not only conflict with each other in many cases, but also exhibit non-linear relationships that vary significantly depending on the type of architectural blocks used.\\nOur two-stage optimisation strategy (lines 324-333) represents a key theoretical advancement through Pareto Front Stretching. This approach allows us to effectively explore regions of the solution space with naturally different scales, generating both highly efficient architectures and highly accurate models. 
The systematic exploration framework dynamically balances these conflicting goals, identifying Pareto-optimal solutions that would be difficult to achieve with traditional techniques.\", \"the_innovations_introduced_produce_substantial_practical_impacts\": \"the Performance Predictors show a significant improvement (Spearman correlation from 0.687 to 0.855), while POMONAG achieves higher accuracy on both NASBench201 (+4.06%) and MobileNetV3 (+2.67%) with significantly improved efficiency (up to -90% parameters, -93% MACs). The validation on 15 different datasets with single training cycles demonstrates not only the effectiveness of the method, but also its computational practicality.\\nThe extensive empirical validation confirms that POMONAG introduces fundamental innovations in the field of neural architecture search, overcoming the limitations of previous approaches through many-objective optimisation integrated in the diffusion process. This opens up new directions for the generation of architectures that effectively balance accuracy and computational efficiency.\"}", "{\"comment\": \"Thank you to the authors for their detailed response. After reviewing the comments from other reviewers and the authors' replies, I have decided to maintain my original score. Upon reflection, my initial review remains generous given the collective feedback and responses. I appreciate the authors' thorough efforts in addressing the concerns and wish them success in their future revisions.\"}", "{\"comment\": \"It is possible that we did not understand each other correctly, but we now believe we have understood the questions:\\n1. The ranges are the generous contours of the scaling value identified for the diffusion process used in DiffusionNAG (equal to 10000). For the secondary metrics, these values were adjusted to scale with the accuracy factor.\\n2. We are deepening the formulation of the diffusion process and filtering by Pareto front. 
This will probably be a dedicated section in the appendix.\\n3. We apologise if the reviewer was unable to identify our answer, we will be more precise with a numbered list in the future. Our answer was the following: \\\"As far as the version of POMONAG optimised only for accuracy is concerned, this achieves performance in the neighbourhood of the best models obtained with the many-objective version, but with more parameters, MACs and inference time. This is an expected result since the search spaces, although large, are constrained by the family of architectures they describe. It is therefore natural that there are solution sets with very close performance; the main challenge lies in balancing this performance with secondary metrics that tend to be in antithesis.\\\". From this clarification, we understood that he meant a two-pronged optimisation. This would go beyond the optimisation process we sought, however, seeing the little difference in accuracy between single-objective and many-objective researched architectures, we expect a result close to that obtained with the current configuration. It is possible, however, to obtain slightly better trade-offs by optimising for only two objectives, the problem to be solved being simpler.\\n\\nIn conclusion, we apologise for any misunderstandings, and thank you very much for your time and suggestions. We will treasure them.\"}", "{\"comment\": \"We understand. We thank the reviewer for the time invested, and will treasure his comments.\"}", "{\"comment\": \"Thank the authors for the response. After reading other reviewers' comments and the corresponding replies from the authors, I would like to maintain the score. Some key concerns have not been addressed, e.g. lack of novelty and significance in contribution, lack of evidence for wide applicability and lack of clarity on how the Pareto front is created. 
The linear combination of four objectives makes POMONAG less of a multi-objective approach.\"}", "{\"comment\": \"We thank the reviewer for his time and valuable comments.\\n\\nOur contributions extend significantly beyond adapting DiffusionNAG. The Many-Objective Reverse Diffusion Guidance (lines 162-215) introduces a novel framework that harmonises competing gradients, ensuring stable convergence during both training and inference - a non-trivial achievement given the complexity of balancing multiple conflicting objectives simultaneously.\\nOur two-phase scaling optimisation approach (lines 324-333) represents a key theoretical advancement, introducing Pareto Front Stretching for effectively navigating solution spaces with disparate scales. This systematic exploration framework enables discovery of truly Pareto-optimal solutions.\", \"these_innovations_yield_substantial_practical_impacts\": \"Performance Predictors show marked improvement (Spearman correlation from 0.687 to 0.855), whilst POMONAG achieves superior accuracy on both NASBench201 (+4.06%) and MobileNetV3 (+2.67%) with significantly enhanced efficiency (up to -90% parameters, -93% MACs). These gains, validated across 15 datasets with single training cycles, demonstrate meaningful progress towards deployable architectures.\\n\\n\\n\\nPOMONAG builds upon established Transferable NAS research for image classification, following seminal ICLR works (MetaD2A, TNAS, DiffusionNAG) that established MobileNetV3 and NASBench201 as standard benchmarks. Our validation extends significantly across 15 diverse datasets, providing thorough comparison with previous approaches and state-of-the-art DiffusionNAG.\\nExploring transformer spaces like HW-GPT-Bench presents an intriguing future direction. 
However, our current focus remains on advancing image classification NAS, where we demonstrate substantial improvements within this well-defined research trajectory.\\nThe ablation study in Appendix C rigorously demonstrates each component's independent contribution - from enhanced Performance Predictors to Meta-Datasets - validating POMONAG's innovations beyond DiffusionNAG's structure.\\nDue to space constraints, we present comprehensive performance metrics in Table 5, demonstrating POMONAG's advantages across accuracy, parameters, MACs and latency. The complete Pareto front visualisations are available in Appendix B.\\n\\n\\nWe appreciate your feedback on the paper's structure. While positively received by reviewers w9TN, s4FH and hqyM, we acknowledge room for improvement. The structure - related work, contributions, method, experiments, results and discussion - follows a format we've found effective, though we understand preferences vary.\\nOur frequent DiffusionNAG references aimed to highlight POMONAG's innovations rather than repeat established foundations. As the latest TransferNAS advancement, POMONAG naturally builds upon previous work while introducing substantial novel contributions.\\nWe are particularly grateful for noting the font size error caused by an uncommented /footnotesize from page 4. This technical oversight has been promptly corrected, ensuring full ICLR compliance within the 10-page limit. The camera-ready version will reflect this correction.\", \"regarding_scaling_factors\": \"These values balance multiple objectives during diffusion. Starting from the reference value 10000, we systematically explored and calibrated ranges for secondary metrics (lines 342-346), optimising for expected accuracy on the Pareto front (lines 327-355). The impact is significant: scaling factors critically influence the Pareto front shape by shifting the sampling centroid. 
Given their importance, we shall provide detailed analysis in the camera-ready version.\", \"regarding_sampling\": \"POMONAG generates 256 Gaussian noise tensors, guided by predictors estimating multiple metrics during diffusion. After filtering inconsistent configurations, remaining architectures are evaluated for parameters, MACs and latency, forming a Pareto front that enables selection based on efficiency, accuracy/metric ratio, or peak accuracy (lines 309-325). We use original dataset splits where available, otherwise creating stratified validation sets (seed 42). Multi-objective optimisation employs Tree-structured Parzen Estimation with Hyperband pruning via Optuna's default configuration.\\n\\n\\n\\nWe sincerely appreciate your thorough review and valuable external perspective. The points highlighted have been comprehensively incorporated into the manuscript, enhancing its depth and clarity. We are particularly grateful for noting the font size error, which was immediately rectified.\\nWe trust that this technical oversight significantly influenced the initial assessment. Given our thorough clarifications and responses, alongside the extensive experimental validation conducted over years of research - which aligns with and extends the standards established in previous editions of this conference - we hope you might reconsider your evaluation.\\nThank you for your detailed guidance in strengthening our submission.\"}", "{\"comment\": \"We thank the reviewer for the valuable comments and insights on our paper.\\n\\nWe recognise that the formulation of the Reverse Diffusion Guidance Process, being a major contribution, could be presented at the beginning. We have carefully considered this modification, which would make one of the key steps clearer. In the methods, we chose to briefly anticipate the DiffusionNAG formulation, focusing on single-target search, and then move the focus to many-objective inductively. 
The choice to place the related works before the method, and not at the bottom as is usual, stems from the need to provide the reader with a clear view of the neural architecture search context and the operation of DiffusionNAG, which may not be immediate. For this reason we have opted for this format, while recognising the validity of the alternative.\\n\\nWe also find the recommended approach of using four separate diffusion processes to be subsequently balanced by many-objective optimisation extremely interesting. This would undoubtedly be a path to explore. However, the result would inevitably differ from that proposed in this paper. It would likely require longer generation times in proportion to the number of metrics to be considered. In any case, it remains an extremely valid route.\\n\\nWe appreciate the constructive feedback on the need for a more in-depth analysis regarding the transition from single to multi-objective, despite the fact that it was described in two separate places in the paper (lines 161-215 and 302-323). In the revised version, we will include an analysis of convergence properties and the impact of weights on goal balancing.\\n\\nAs far as the predictors included in this work are concerned, they have not been gone into in detail as they retain the DiffusionNAG architecture, the difference being that they have been adapted and dedicated to the prediction of secondary metrics. In the camera-ready we will specify this aspect in more detail, which will certainly be useful to the reader.\\n\\nRegarding the scaling factors, their search range was deliberately set wide ([1000,5000] for accuracy, [100,500] for the other objectives) to allow a complete exploration of the solution space through many-objective optimisation. 
This choice is motivated by two key considerations:\\n- The different objectives (accuracy, parameters, MACs, latency) operate on naturally different scales, requiring proportionally different scaling factors to balance their contribution in the diffusion process.\\n- The use of Tree-structured Parzen Estimation with Hyperband pruning allows to efficiently explore this space, quickly identifying promising regions and discarding sub-optimal ones.\\nAs demonstrated in our experiments (lines 324-333), this strategy led to convergence towards optimal values for both search spaces, generating Pareto-optimal architectures that effectively balance the different objectives. The empirical results in Appendix B confirm the robustness of these values across multiple datasets.\\n\\nAs far as the version of POMONAG optimised only for accuracy is concerned, this achieves performance in the neighbourhood of the best models obtained with the many-objective version, but with more parameters, MACs and inference time. This is an expected result since the search spaces, although large, are constrained by the family of architectures they describe. It is therefore natural that there are solution sets with very close performance; the main challenge lies in balancing this performance with secondary metrics that tend to be in antithesis.\\n\\nThe methodological differences between DiffusionNAG and POMONAG are discussed in section 3.1 (lines 161-215). The extension from single-objective to many-objective involves the introduction of new terms in the diffusion process for parameters, MACs and latency. Each additional term requires its own dedicated Performance Predictor to guide the deployment process towards optimal architectures with respect to that specific criterion. The main challenge, addressed through the Many-Objective Reverse Diffusion Guidance framework, was balancing these competitive gradients while maintaining process stability. We recognise that this analysis could be deepened. 
In the camera-ready version, we will add a more detailed discussion on how the interaction between the different gradients influences the diffusion process and the convergence towards Pareto-optimal solutions.\\n\\nIn conclusion, we sincerely thank the reviewer for the valuable comments, and trust that the changes to the article and the clarifications may lead him to positively revise his assessment.\"}", "{\"metareview\": \"This study proposes POMONAG, extending DiffusionNAG for multi-objective optimization. While the method is clearly presented with experiments across 15 datasets, reviewers noted limited novelty, as the approach largely builds on DiffusionNAG with minor extensions. The lack of comparisons with state-of-the-art multi-objective and zero-shot NAS methods, along with insufficient detail on ablations and hyperparameter selection, weakens the contribution. The AC agrees with these concerns and recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers consistently raised concerns about limited novelty and insufficient comparisons with recent methods. Despite the authors\\u2019 rebuttal, key issues remain unresolved.\"}", "{\"summary\": \"This paper introduces POMONAG, an extension to DiffusionNAG that applies a many-objective diffusion model to optimize neural architecture generation for many-objective optimization. By incorporating additional performance predictors for hardware efficiency metrics such as number of parameters, multiply-accumulate operations (MACs), and inference latency, POMONAG aims to provide a more balanced approach to architecture optimization across accuracy and computational efficiency. 
Experiments validate POMONAG\\u2019s efficacy on two major CNN search spaces (NASBench201 and MobileNetV3).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation to extend DiffusionNAG to a many-objective setting is valid and POMONAG does so by incorporating both accuracy and efficiency metrics like latency and MACs, which are critical for resource-constrained environments. The paper provides extensive experimental comparisons with DiffusionNAG, including evaluations across multiple datasets and search spaces, which helps demonstrate the general applicability of POMONAG.\\nBalancing the different objectives being optimized is also very important in my opinion. The authors do so by proposing a pareto front filtering and stretching subroutine.\", \"weaknesses\": \"I have the following main concerns related to this submission, which I believe were crucial in the final decision:\\n\\n- **Incremental Contributions**: Although POMONAG claims to extend DiffusionNAG\\u2019s capabilities by addressing more objectives, the modifications appear incremental and lack substantial theoretical advancement. More specifically, I see the adaptation of diffusion models to accommodate multiple objectives, as described in section 3.1, more as a technical modification rather than a novel conceptual framework. I would recommend the authors to reiterate over their methodology and pinpoint the main contributions of their approach.\\n\\n- **Experimental Evaluation**: The benchmarks that POMONAG was evaluated contain only CNN spaces. It would be beneficial for the paper if the authors would demonstrate the efficacy of POMONAG in Transformer search spaces, such as the one from HW-GPT-Bench [1]. Most importantly, in the multi-(many-)objective experiments, the proposed method is not compared to any baseline. 
I would recommend the authors to add baselines in their experimental evaluation and report hypervolume indicator together with the individual objective values, as well as the search time. Ultimately, I would also be interested in visualizing the pareto front plots in the main paper. As for baselines, you can find a non-exhaustive list of simple ones in SyneTune (https://syne-tune.readthedocs.io/en/latest/getting_started.html#supported-multi-objective-optimization-methods). Finally, the experiments lack a thorough ablation study that demonstrates the impact of POMONAG\\u2019s unique contributions independently of DiffusionNAG\\u2019s foundational structure. \\n\\n- **Clarity and Presentation**: The paper seems to have a somehow fragmented structure, making it challenging for readers to follow the main contributions and crucial take-away points. Equations are not thoroughly explained, and there is a heavy reliance on citations from DiffusionNAG rather than a detailed elaboration of POMONAG itself, making the paper not self-contained. One major point here, which I have also pointed out to the AC, is that the authors have used a smaller font size starting from page 4. The guidelines clearly state that the maximum page limit is 10 and that means 10 pages with the default font size, not a smaller one. I suggest the authors that in future submissions they adhere to the submission guidelines.\\n\\n\\n-- References --\\n\\n[1] Sukthanker et al. HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models. In NeurIPS 2024 DBT\", \"questions\": [\"Moreover, I have the following questions:\", \"Can the authors provide more theoretical or empirical justification for the scaling factors in the Pareto Front Stretching process? How sensitive is the model to these values?\", \"Can the authors provide more detail on the architecture sampling process, dataset splits, and hyperparameter tuning methods used in the experiments? 
This is particularly important for the performance predictors.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate the time and effort invested by the reviewer in evaluating our work. The detailed comments and feedback are valuable, and we will incorporate these insights into our future research. We thank the reviewer for the thorough review.\"}", "{\"comment\": \"Optimising the scaling factors k\\u03d5, k\\u03c0, k\\u03bc and k\\u03bb is a crucial aspect of POMONAG. Starting from the 10000 reference value used in DiffusionNAG for k\\u03d5, we developed a systematic approach to calibrate these parameters that balance the different targets during the diffusion process.\\nGiven the inherently different nature of the targets, we defined two complementary search interval configurations. To generate accuracy-oriented architectures, we explored a wide range [10000, 50000] for k\\u03d5, while maintaining smaller ranges [10, 50] for secondary metrics. Conversely, to prioritise efficiency, we reduced the range of accuracy to [1000, 5000] while increasing that of the other metrics to [100, 500], thus allowing for greater influence of efficiency goals during generation.\\nOptimisation of these intervals was carried out through Tree-structured Parzen Estimation with Hyperband pruning, using Optuna for 100 evaluations on a representative set of datasets (CIFAR10, CIFAR100, Aircraft, Oxford III Pets). This process identified optimal values for both search spaces, as detailed in lines 324-333.\\nThe resulting scaling factors significantly influence the shape of the Pareto front by changing the sampling centroid during the diffusion process. The robustness of these values is empirically confirmed through extensive testing on different datasets, as documented in Appendix B. 
This systematic optimisation process ensures an effective balancing of potentially conflicting objectives, allowing POMONAG to generate architectures that satisfy different design priorities.\\n\\n\\nDetails on the computational costs of POMONAG are presented at the end of Section 4 (lines 469-479). We thank the reviewer for pointing out the need to also specify the training times of the score network and predictors, which we will include in the camera-ready. These times are strongly related to both the type of encoding used for the meta-dataset and its cardinality.\\nOn a single QUADRO RTX 6000, training requires:\\n- Score Network: ~9h for NASBench201, ~19h for MobileNetV3\\n- Performance Predictors: ~8.5h for NASBench201, ~18h for MobileNetV3\", \"these_times_can_be_significantly_optimised_through_various_strategies\": [\"With a cluster of four QUADRO RTX 6000s, due to the increased batch size, score network training times are reduced to 1.5h for NASBench201 and 2.5h for MobileNetV3\", \"Higher-performance hardware such as the A100 can achieve similar or better results by utilising both higher TFLOPS and available memory\", \"Optimised encoding of the meta-dataset makes it possible to maintain short times despite the increase in cardinality\", \"As far as generation is concerned, POMONAG requires:\", \"5:45 minutes on NASBench201\", \"18:15 minutes on MobileNetV3\"], \"these_times_are_particularly_competitive_when_compared_to_other_recent_approaches\": \"- MeCo [1]: 115 minutes on CIFAR10 (GPU not specified)\\n- SWAP-NAS [2]: 6 minutes on CIFAR10/NASBench201 (on Tesla V100, superior hardware)\\n- ZiCo [3]: 10 hours on NVIDIA 3090 (significantly slower despite superior hardware)\\nIt is important to emphasise that the pre-training of architectures is an integral part of the search space and available online, so it is not an additional overhead. 
Considering that the training time is a one-off and that POMONAG shows competitive performance even on non state-of-the-art hardware, we believe that these computational costs represent a significant advantage over alternatives in the literature. POMONAG's ability to achieve superior results with lower computational resources highlights the efficiency of our approach.\n\nA significant advantage of POMONAG lies in its ability to drastically reduce the number of architectures to be trained, thanks to the interaction between Performance Predictors and Pareto Front. The generated architectures are evaluated by estimating their accuracy using the predictors and directly calculating the other metrics (parameters, MACs, latency), allowing a complete Pareto Front to be constructed prior to any training phase.\nFrom this front, a single optimal architecture can be selected based on priorities: the most efficient, the one with the best accuracy-efficiency trade-off, or the one with the highest estimated accuracy. This approach represents a substantial advantage over alternative methods that, lacking accurate predictors and a view of the Pareto front, are forced to train a batch of candidate models to identify the optimal one.\"}", "{\"comment\": \"We thank the reviewer for investing his time and valuable advice.\\n\\nThe advantages of using POMONAG over previous state-of-the-art work in generating architectures for image classification lie in the generation of more accurate architectures with less complexity in terms of parameters, MACs and inference time. Furthermore, thanks to the selection on the Pareto front, it is possible to specify whether to take the best performing architecture in terms of secondary metrics, the one with the best balance between estimated accuracy and these metrics, or simply the architecture with the best estimated accuracy.\\nThe Pareto front is generated from the 256 architectures produced at the end of the POMONAG generation phase. 
For these architectures, parameters, MACs and inference times are calculated, while accuracy is estimated. From these values, Pareto fronts are created, on which the candidate architecture is then selected to be returned according to the user's requirements - whether they want the most efficient structure, the one with the best balance, or the one that simply performs best (lines 327-355).\\nIn general, the optimisation and generation process is fully many-objective, relying on a weighted sum of loss function components, as in practically all works that aggregate different optimisation constraints. So, to reiterate, generation and training are many-objective. As for the creation of Pareto fronts, these are then built from the same set of generated architectures. Parameters and MACs tend to be fairly consistent with each other, while they deviate from inference time in terms of architecture ordering. However, it is possible to identify clusters in the sense that the architectures with fewer parameters and operations tend to be the fastest at inference. For this reason, we preferred not to implement a non-dominated sorting process, typical in NSGA algorithm families. In any case, we thank you for the very interesting and certainly timely food for thought. In the camera-ready we will make all necessary changes to make this clear.\\n\\nThe methods suggested for further comparisons were carefully considered, but they operate on different search spaces that would make the comparison meaningless:\\nSWAP-NAS uses NASBench-101/201/301 and TransNAS-Bench-101. 
On NASBench201, in common with our work, Spearman correlation coefficients are reported - with similar performance to ours, albeit in a different setup - but not test accuracy.\\nZiCo was evaluated on NASBench101, NATSBench-SSS/TSS and TransNASBench-101, while MeCo on NAS-Bench-101, NATS-Bench-TSS, NATS-Bench-SSS, NAS-Bench-301 and TransNAS-Bench-101.\\nWe had already considered including these works in our comparative analysis, but felt that the differences in the search spaces did not allow for a methodologically correct comparison.\\n\\nPOMONAG belongs to the TransferNAS family, which is substantially different from zero-cost proxy methods. TransferNAS methods leverage pre-trained model knowledge to guide architecture search, identifying promising patterns and reducing exploration of suboptimal designs. While generally faster, zero-cost proxy methods evaluate candidates without training using metrics like FLOPs and parameter counts, potentially missing important performance characteristics that only emerge during actual training. POMONAG is also distinguished within the TransferNAS family by the use of a diffusion process that allows for a denser stochastic exploration of the generation space due to its energy-based nature.\\n\\nWith regard to time complexity, we have reported the generation times for the search spaces studied and the training times for the generated models, which vary according to the priority of efficiency or performance. 
A direct comparison with the suggested methods would be inaccurate due to the different search spaces and hardware used.\\nHowever, analysing the available data:\\n- POMONAG needs 5:45 minutes on NASBench201 and 18:15 minutes on MobileNetV3\\n- MeCo takes 0.08 GPU days (115 minutes) on CIFAR10 (GPU not specified)\\n- SWAP-NAS, on Tesla V100, takes 6 minutes on CIFAR10/NASBench201 (+4.35% compared to POMONAG, on higher hardware)\\n- ZiCo takes 0.4 GPU days (10 hours) on NVIDIA 3090, being significantly slower despite superior hardware\\nThese comparisons, although approximate and estimated to the disadvantage of POMONAG, highlight the superior efficiency of our approach. We are grateful for this opportunity for analysis, which allowed us to highlight the advantages of POMONAG even over other NAS techniques not initially included in the study.\\nThe diffusion process is carried out for 10000 steps, as in the reference work, and this corresponds to the achievement of convergence in a very expansive environment characterised by the steadiness of the solution. Reducing the number of steps too much would lead to even shorter generation times, and we will certainly investigate this further. Increasing them showed no improvement in performance.\"}", "{\"comment\": \"Thank you for your detailed comments. We apologise for having to split the response into two, but the issues included by the reviewer were numerous, and the answers must be comprehensive.\", \"we_address_the_issues_raised_point_by_point\": [\"Links will be added after acceptance, as their earlier inclusion would have compromised anonymity. We thank you for the valuable comments on the figure and confirm that we are already working on its improvement for camera-ready.\", \"We have clarified that \\u2018noisy architecture\\u2019 means architecture that has not yet been cleaned of noise, i.e. the matrix representation of the architecture during the diffusion process. 
We have added an explanatory line for this definition.\", \"The equations have been numbered as required and we have added the necessary definitions, correcting the reported typo. Given the equivalence of concepts, we have renamed everything as \\u2018Reverse Diffusion Guidance Process\\u2019 to avoid confusion. At Line 183 we present the process equation proposed by An et al., from which POMONAG's formulation is extended - we have added a clarifying sentence. We have also specified that s_\\u03b8(A_t,t) represents the diffusion step applied to architecture A at time t.\", \"The decision not to use predictors for parameters, MACS and inference time in denoised architectures is motivated by greater efficiency and accuracy in direct calculation. Estimates are only used during generation, when a concrete architecture does not yet exist. The only metric necessarily estimated post-generation is accuracy, which is essential for the Pareto front and the selection of the architecture to be trained.\", \"The choice of ViT-B-16 is amply justified in the appendix (lines 972-1010), demonstrating an improvement in the Spearman correlation of predictors.\", \"As explained, this work fits into a well-defined and structured strand of literature on image classification (MetaD2A, TNAS, DiffusionNAG - all ICLR). The suggested metrics, while interesting, apply to different tasks and research spaces. We appreciate the suggestion for future developments.\", \"We regret reading the final considerations, especially considering that the article clearly demonstrates how POMONAG represents the state of the art in accuracy in the research strand under consideration, requiring the training of a single architecture and being faster even in the generation phase than the models cited by the reviewer. 
The three metrics analysed serve to investigate a crucial dimension of the diffusion process: they capture the percentage of valid architectures generated (validity), their heterogeneity to prevent collapses (uniqueness) and diversity with respect to the training set to assess generalisation (novelty). Sub-optimal values in these metrics would compromise the entire generation process, while the positive results obtained confirm their validity.\", \"We are deeply grateful for the reviews and the many valuable suggestions received. We believe we have clarified many potentially ambiguous points and supplemented the required information to make the camera-ready even more meaningful. With sincere gratitude, we hope that the reviewer will positively reconsider his assessment, in light of the provided demonstrations on the scientific validity, superior performance, computational efficiency and innovativeness of POMONAG.\"]}", "{\"comment\": \"I sincerely appreciate the feedback from authors. However, I still have concerns in the following aspects:\\n\\n1) For Weakness 1, the authors listed some cases where the secondary metrics demonstrate conflict relationships. However, some specific cases are still not really convincing to me. The more general clarification for this point is still needed.\\n2) I cannot find the computational cost analysis in lines 504-507. Maybe lines 469-479?\\n3) I reserve my opinion in terms of the experiments on ImageNet-1K. Even though the time is limited in the rebuttal period, I still encourage the authors to include this set of experiments in the future work.\\n\\nI decide to keep my initial rating.\"}", "{\"summary\": \"This study improved DiffusionNAG by introducing a multi-objective approach which modifies DiffusionNAG's reverse diffusion process as a reverse diffusion guidance process. Other than accuracy, #params, MACs and inference latency are also considered in the multi-objective metrics. 
The proposed method POMONAG has been tested on NASBench201 and MobileNetV3 with 15 image classification tasks, showing better performance than DiffusionNAG and a series of other methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation of this study, introducing multi-objective evaluation in NAS, is commendable as a task in reality is often not just about accuracy. Other metrics should be considered simultaneously as well.\\n\\nThe writing is easy to follow. \\n\\nIt is nice to see equations with highlights of different colours.\", \"weaknesses\": \"**First of all**, the work claims to be on Pareto multiobjective search for architectures. However, that point is not obvious from the paper.\\n* What are the benefits of using the proposed POMONAG? \\n* How can a Pareto front be generated and utilized? Need to explicitly demonstrate how POMONAG generates and utilizes Pareto fronts.\\n* How can users select architectures from the Pareto front according to their needs or under different circumstances? Show examples of such selection based on different priorities, for example prioritizing small-size architectures for portable devices or focusing on latency reduction etc. \\n* It seems non-dominated sorting is absent. Explain how non-dominated sorting is incorporated or can be incorporated in POMONAG.\\n* In its current form, the paper reads like a combination or integration of single-objective evaluations rather than a multi-objective evaluation. The equation of POMONAG at Line 209/210 is a linear combination of four objectives. Please clarify if the linear combination of objectives is intended as a scalarization approach. If so, discuss its limitations.\\n--- \\n**Secondly**, the performance of POMONAG appears better than DiffusionNAG and other methods shown in the paper. However many SOTA methods, especially zero proxy methods are missing. 
Their reported performance is similar or even better, for example, SWAP-NAS by Peng et al, ICLR'24, ZiCo by Li et al, ICLR'23, MeCo by Jiang et al, NeurIPS'23.\\n* Include a comparison with these SOTA methods. If a direct comparison is not possible, explain why and discuss the limitations of the current evaluation.\\n* Discuss how POMONAG's approach differs from or improves upon zero-proxy methods. \\n\\n--- \\n**Thirdly**, the computational cost aspect of POMONAG is weak. The section \\\"Generation and Training Time\\\" should be better presented. The method requires a diffusion generation phase which takes extra time. That itself is a disadvantage. Also timewise, POMONAG cannot claim superiority as recent methods mentioned earlier are faster.\\n* Present a detailed table comparing computational costs (including generation and training time) of POMONAG with other methods, including these zero proxy methods mentioned above. Seemingly these methods are faster. If POMONAG is indeed slower, discuss potential optimization strategies.\\n* Discuss the trade-offs between the additional diffusion generation phase and the method's performance gains. Justify why the additional computational cost might be worthwhile.\\n--- \\n**Other points:**\\n \\nThe link at Line 091 is showing. Also, including the code and dataset would be helpful for the assessment.\\n\\n--- \\n\\nFig 1 is not quite readable. The figure further makes POMONAG look like three single-objective tasks combined rather than a four-objective task.\\n* Improve readability, especially on the right-hand side.\\n* Better illustrate the integration of all four objectives in a unified multi-objective framework if these objectives are not just simply added together (*see the first part of my comments*).\\n * Provide a clearer visual representation of how POMONAG handles the trade-offs between objectives (*see the first part of my comments*).\\n--- \\nLine 186, the term noisy architecture is not explained. 
\\n* Provide a brief explanation of what \\\"noisy architecture\\\" means in this context and how it relates to the diffusion process in DiffusionNAG.\\n--- \\nEquations and their connection to the processes/algorithms are not numbered and not clearly explained. \\n* Number all equations for easy reference\\n* Clearly label the equation at Line 183. Is this equation for the Reverse Diffusion Process? Clarify that connection.\\n* Provide a brief explanation of the symbols used in this equation and other key equations.\\n* Explain the purpose of transformation s_\\u03b8(A_t,t).\\n* Explain the exact differences between the Reverse Diffusion Process and the Reverse Diffusion Guidance Process.\\n--- \\nLine 280, \\\"Four are dedicated to the respective estimation of accuracy, parameters, MACs, and inference latency of noisy architectures during the diffusion phase. \\\" \\n* Explain why not use these four metrics for denoised architectures as well.\\n* Justify the point that the denoised architecture uses accuracy as its only metric.\\n--- \\n\\nExplain the reason why POMONAG utilises Vision Transformer ViT-B-16 instead of other models (Line 286).\\n\\n--- \\nIt is good to see the Spearman correlation experiment. That is very important in NAS studies. However, for a thorough comparison of correlation, it should be done on a set of tasks like NAS-Bench-Suite-Zero (Krishnakumar et al. NeurIPS'22).\\n* Perform a similar thorough comparison comparing correlations on different tasks using different search spaces.\\n--- \\nIn lines 400-402, the same latex problem appeared several times, ` not ' for the left quotation marks, Accuracy, Params, MACS ... \\\\\\n* Fix these formatting issues.\\n--- \\nValidity, uniqueness and novelty are nice metrics for a population of solutions but not so critical for tasks that focus on accuracy and speed. 
What is the point of being excellent on these points but without good accuracy and speed?\\n* Explain the significance of these three additional metrics: validity, uniqueness and novelty.\\n* Show example how these measures can help improve the quality of generated architectures in POMONAG.\", \"questions\": \"See above as the questions are mostly addressing the weakness of this paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the POMONAG method to generate neural architectures in the multi-objective manner. Specifically, the overall framework of POMONAG is designed based on that of DiffusionNAG, in order to achieve better performance in terms of number of parameters, MACs, and inference latency beyond the accuracy. There are four key parts designed to achieve this goal, i.e., the many-objective reverse diffusion guidance, the meta-dataset, the score network and performance predictors, and the pareto front filtering and stretching. The experimental results in NAS-Bench-201 and MobileNetV3 search spaces demonstrates the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The idea the overall framework of the proposed POMONAG method is simple and easy to understand.\\n2) The details of the method and experiments are clearly stated. \\n3) Generating neural architectures in the multi-objective manner is an important research topic.\", \"weaknesses\": \"1) My major concern is about the motivation of this work. Specifically, there are four objectives considered, i.e., the accuracy, the number of parameters, MACs, and the inference latency. However, the last three objectives do not demonstrate conflict relationship. For instance, the smaller number of parameters seems certain to lead to lower inference latency. 
In this case, the necessity for adopting multi-objective optimization is limited.\\n2) The novelty of the proposed method needs further discussion. Specifically, the proposed method seems to build on DiffusionNAG with the cooperation of the multi-objective optimization. It seems that the POMONAG is just a simple combination of these methods. More discussion in terms of the seminal contribution of POMONAG is needed. \\n3) How are the hyperparameters $k_{\\\\phi}$, $k_{\\\\pi}$, $k_{\\\\mu}$, and $k_{\\\\lambda}$ determined? It is suggested to provide more details in terms of the hyper-parameter study for these hyperparameters. \\n4) The search cost of POMONAG is not well presented. In the pipeline of POMONAG, I think the pre-training process, the training of the score network, and the training for the performance predictors will introduce much additional search cost beyond the architecture generation. However, I cannot find any details about the overall search cost and the search cost for the above components. \\n5) I am curious about why only one trained architecture is enough for POMONAG? Maybe more discussions or analysis are helpful to give more insights for this point. \\n6) Lack of experimental results on more challenging tasks (i.e., the classification accuracy on ImageNet-1K). More results on such datasets are helpful to enhance the experiments.\", \"questions\": \"Please see the weaknesses. If the concerns raised are well addressed, I am glad to increase my rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer again for his time.\", \"regarding_the_last_remarks\": \"1. 
The reviewer will forgive us for the correction, but the specific case in which there are strong correlations is only one; in general, the search spaces above NASBench201 - although even in this one parameters/MACs and inference times are not absolutely linearly correlated - show strong heterogeneity between parameters, MACs and inference times.\\n2. The reviewer is right, let us correct the reference to the old version of the article.\\n3. We accept the observation.\\n\\nThanks again.\"}", "{\"comment\": \"Thank the authors for the response. However, I still have concerns in the following aspects:\\n\\n1, Maybe the authors have not fully understood my first question, which is [How to decide the scaling factors?]. I guess the authors just arbitrarily choose several values of scaling factors, but that does not sound sufficient. Indeed, such tradeoff between different objectives might be challenging, but still needs some methods. If I am wrong, please correct me.\\n\\n2, The authors claimed that, the theoretical analysis would be added in the camera-ready version, but they do not provide any clue for that analysis or proof.\\n\\n3, The authors ignored my second question, [how about the performance of POMONAG if just considering adding one factor?]. In fact, I think extending the diffusion model from one factor to two factors might more easily provide the insight behind this work.\\n\\nIn summary, I agree that this idea is good and natural and important, but this work needs more verification. So I will keep my initial rating.\"}", "{\"comment\": \"We fully understand the reviewer's remark about validation on ImageNet-1K, which certainly represents a significant benchmark. However, we deliberately opted for a broader and more diverse validation strategy. 
Instead of focusing on a single dataset, however authoritative, we significantly extended our analysis from four datasets (as in most works in this line of research) to fifteen different datasets.\\nThis choice is motivated by recent discussions in the literature questioning the exclusive use of ImageNet as a universal benchmark, pointing out the limitations of its ability to assess the true generalisation of image classification models [4, 5]. We believe that validation on a wider range of datasets offers a more complete and robust perspective of POMONAG's capabilities, while acknowledging the validity of the reviewer's suggestion.\\n\\nWe apologise for the long answers, but we wanted to be sure that we provided all the information requested in a clear and comprehensive manner. We hope we have cleared up any possible doubts and, remaining at your disposal for any further requests for clarification or in-depth analysis, we thank the reviewer once again, and trust that he will be able to re-evaluate our work positively. \\n\\n[1] Jiang, T., Wang, H., & Bie, R. (2024). MeCo: zero-shot NAS with one data and single forward pass via minimum eigenvalue of correlation. Advances in Neural Information Processing Systems, 36.\\n[2] Peng, Y., Song, A., Fayek, H. M., Ciesielski, V., & Chang, X. SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS. In The Twelfth International Conference on Learning Representations.\\n[3] Li, G., Yang, Y., Bhardwaj, K., & Marculescu, R. ZiCo: Zero-shot NAS via inverse Coefficient of Variation on Gradients. In The Eleventh International Conference on Learning Representations.\\n[4] Recht, B., Roelofs, R., Schmidt, L., & Shankar, V. (2019, May). Do imagenet classifiers generalize to imagenet?. In International conference on machine learning (pp. 5389-5400). PMLR.\\n[5] Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., ... & Gilmer, J. (2021). 
The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 8340-8349).\"}", "{\"comment\": \"We are most grateful for the reviewer's thorough analysis and recognition of our work's quality.\\n\\nRegarding the Pareto Front filtering (lines 299-304), this occurs after architecture generation. For each architecture, we compute the parameters, MACs and inference latency, whilst using the predictor to estimate accuracy. From these Pareto fronts, we identify three configurations per secondary metric: the most efficient architecture (lowest metric), the most balanced (optimal accuracy/metric trade-off), and the most accurate (highest predicted accuracy). This approach provides practitioners with clear options suited to different deployment scenarios.\\n\\nThe foundational works - MetaD2A (Lee et al., ICLR 2021), TNAS (Shala et al., ICLR 2023) and DiffusionNAG (An et al., ICLR 2024) - were all published at ICLR and establish MobileNetV3 and NASBench201 as standard benchmarks. Whilst POMONAG builds upon this established research trajectory, we have substantially expanded the validation across a broader range of datasets to demonstrate wider applicability.\\n\\nOur contribution extends well beyond enhancing DiffusionNAG. A primary innovation is the formulation of Many-Objective Reverse Diffusion Guidance, which elegantly balances four distinct gradients during generation. The optimisation of these gradients presented unique challenges: the scaling factors operate across vastly different scales, whilst maintaining convergence and architectural quality. We addressed this through a novel two-phase approach (lines 324-333) optimised via Hyperband pruning.\\n\\nThe Performance Predictors underwent significant redesign, yielding marked improvements in Spearman correlation (from 0.687 to 0.855). 
The expanded Meta-Dataset properly supports multi-objective optimisation, whilst our Pareto-optimal filtering identifies three practical configurations (Acc/Bal/Eff) suited to different deployment contexts. The empirical results validate these contributions conclusively: POMONAG surpasses DiffusionNAG in both accuracy (+4.06% on NASBench201) and efficiency metrics, with remarkable reductions in parameters (90%) and MACs (93%).\\n\\nWe might also note that POMONAG achieves these improvements whilst requiring only a single architecture to be trained per dataset, significantly reducing computational overhead compared to prior approaches.\\n\\nWe trust these clarifications address the points raised and demonstrate the substantial nature of our contributions. We are grateful for the reviewer's careful consideration and hope these explanations enable a fuller appreciation of the work's merit.\"}", "{\"summary\": \"This paper is a direct extension based on DiffusionNAG, which can deal with multi-objective optimization in NAS. These objectives include accuracy, the number of parameters, multiply-accumulate operations (MACs), and inference latency. This motivation is good and natural, and the authors expressed their work clearly, from the motivation to the experiments results. Some details need to be clarified.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces the ParetoOptimal Many-Objective Neural Architecture Generator (POMONAG), extending DiffusionNAG through a many-objective diffusion process. POMONAG simultaneously considers accuracy, the number of parameters, multiply-accumulate operations (MACs), and inference latency. The experiments validate the performance of the proposed model.\", \"weaknesses\": \"1, The multi-objective optimization problem formulation in this work can be given first, which then can be solved by the proposed weighted factors in the reverse diffusion process. 
But maybe the authors can consider other ways to solve this. For example, using four single reverse diffusion processes, each targeting one factor, as DiffusionNAG did, then using multi-objective optimization for further trade-off may also work well.\\n\\n2, The theoretical analysis should be strengthened. One objective to many objectives is a breakthrough, but such a process needs more analysis or discussion. Current work lacks such in-depth thinking. \\n\\n3, Several predictors are needed in this work, but the detailed information about these predictors is missing.\", \"questions\": \"I have several questions about this work.\\n\\n1, How to decide the scaling factors? Since the intervals are [1000,5000], [100,500], [100,500], [100,500], and the values seem to be integer, then the whole factor space equals 4000 * 400 * 400 * 400, which is quite huge. And the authors present one setting for NASBench201 and other experiments, respectively, so I am wondering whether there is some method or strategy to choose such factors? \\n\\n2, This work extends the basic motivation of DiffusionNAG, which is rather good and natural. Such extension includes three more factors, including the number of parameters, the number of MACs, and the inference latency. But I am curious that, how about the performance of POMONAG if just considering adding one factor? \\n\\n3, From one factor, say, accuracy, to three more factors seems to strengthen the proposed POMONAG, but my question is, are the working mechanisms of DiffusionNAG and POMONAG the same or different? Although the two diffusion processes consider different factors, which is the obvious difference, the analysis or discussion is important to interpret this issue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response. 
I think the submission needs a careful thorough evaluation of the proposed algorithm in multi-objective settings if the authors claim that it is a multi-objective optimization method. As I still see from the experiments section, there is no multi-objective baseline considered in the benchmarks. After also reading the other reviewers' comments, I decided to increase the score to 3 (was 1 because of the smaller font size), because I think this submission is not ready to be publishable and needs a more thorough iteration.\"}" ] }
4xbwWerxvZ
Consistency Model is an Effective Posterior Sample Approximation for Diffusion Inverse Solvers
[ "Tongda Xu", "Ziran Zhu", "Jian Li", "Dailan He", "Yuanyuan Wang", "Ming Sun", "Ling Li", "Hongwei Qin", "Yan Wang", "Jingjing Liu", "Ya-Qin Zhang" ]
Diffusion Inverse Solvers (DIS) are designed to sample from the conditional distribution with a pre-trained diffusion model, an operator, and a measurement derived from an unknown image. Existing DIS estimate the conditional score function by evaluating the operator with an approximated posterior sample. However, most prior approximations rely on the posterior means, which may not lie in the support of the image distribution and diverge from the appearance of genuine images. Such out-of-support samples may significantly degrade the performance of the operator, particularly when it is a neural network. In this paper, we introduce a novel approach for posterior approximation that is guaranteed to generate valid samples within the support of the image distribution, and also enhances the compatibility with neural network-based operators. We first demonstrate that the solution of the Probability Flow Ordinary Differential Equation (PF-ODE) yields an effective posterior sample with high probability. Based on this observation, we adopt the Consistency Model (CM), which is distilled from the PF-ODE, for posterior sampling. Through extensive experiments, we show that our proposed method for posterior sample approximation substantially enhances the effectiveness of DIS for neural network measurement operators (e.g., in semantic segmentation). The source code is provided in the supplementary material.
[ "Diffusion model", "Inverse problem" ]
Reject
https://openreview.net/pdf?id=4xbwWerxvZ
https://openreview.net/forum?id=4xbwWerxvZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ubAYuxguTh", "uFHUxpJOm6", "tZWfZSfDxw", "s8htYPjq9M", "o5pZtwDpBn", "hxSOs9W9We", "b53ghj9aHo", "RuxxuR55O3", "OpoN3ttQxb", "HNamkXjzUx", "GG4PmbE9Yv", "DWze237SuX", "CX2oUYTg9I", "4is4t3Myru", "4NvMNfc96a" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732372932761, 1734968398450, 1732373020499, 1732373002225, 1737523536433, 1730678081818, 1732372908161, 1732372963402, 1732499011129, 1730710995390, 1730756872113, 1732525433434, 1732875029931, 1732619287547, 1730645519175 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2857/Authors" ], [ "ICLR.cc/2025/Conference/Submission2857/Area_Chair_71oA" ], [ "ICLR.cc/2025/Conference/Submission2857/Authors" ], [ "ICLR.cc/2025/Conference/Submission2857/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_cxtg" ], [ "ICLR.cc/2025/Conference/Submission2857/Authors" ], [ "ICLR.cc/2025/Conference/Submission2857/Authors" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_h6fG" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_kz1a" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_h6fG" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_kz1a" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_LTYd" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_cxtg" ], [ "ICLR.cc/2025/Conference/Submission2857/Reviewer_LTYd" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your detailed comments. 
It appears that there have been some misunderstandings, and we have provided our responses to your questions below.\\nWe hope that our response can clarify the misunderstanding and you can reassess our contribution.\\n\\n## W1 why in (1) the authors formulate the Markov chain following a VE formulation?\\n\\nThe result of this paper is heavily based on stochastic differential equations (SDE) and the probability flow ordinary differential equation (PF-ODE). The classic literature that introduces SDE and PF-ODE adopts VE notation [Score-Based Generative Modeling through Stochastic Differential Equations] and we follow the tradition in this paper. Besides, the VE and VP formulations are essentially equivalent and can be made interchangeable by linear scaling. In popular projects such as Huggingface/diffusers, VP and VE diffusions are implemented in a unified way: VE diffusion solvers (Euler) can be applied to solve a VP diffusion (Stable Diffusion 2.0), and vice versa.\\n\\n## W2 the paper in Section 3.4 discusses that the CM is overfitting to f(.) and proposes an approach to make the framework robust:\\n\\nWe would like to emphasize here that No CM is trained for $f(.)$. Our approach works for the zero-shot setting, which is the main motivation of the paper.\\n\\n## Q1: Is CM trained with knowledge of f(.), or if this is a misunderstanding?\\n\\nThis question is related to the W2, and we would like to clarify that NO new CM is trained with the knowledge of f(.).\\nWe use existing unconditional CMs which do not know f(.).\\n\\n## Q2: If the CM is trained with f(.), please explain and justify comparing it to methods like DPS that don't use this information?\\n\\nAgain, CM is not trained with the knowledge of $f(.)$.\\n\\n## W3 Lack of thorough literature\\n\\nThanks for the pointer. 
We will cite the reference [Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors] and discuss its relation with our work.\\n\\nHowever, we would like to note that the reference provided is a concurrent work: it will not be formally published until Dec 2024, which is after the deadline of ICLR 2025. Hence, we think not discussing this paper should not be considered a lack of thorough literature review.\\n\\n## W4 The paper needs improvement in presentation\\n\\nThanks a lot for the comments and suggestions. We have polished the presentation as follows in the revised version:\\n* We have removed the notations in the abstract.\\n* We have added the definition of consistency models in the introduction.\\n* We have included the implementation details of Table 1.\\n* \\\"neural network operators\\\" means the measurement operators $f(.)$. We have rephrased it as \\\"neural network measurement operators\\\".
Specifically, the approach introduces computational overhead, as it requires training both a diffusion model and a CM, which could be a significant constraint given the difficulty of training CMs. Additionally, the quantitative results show limited improvement over existing methods like DPS, and comparisons with more recent baselines would be appreciated. Besides, the mathematical formulation and paper presentation could be further improved.\\n\\nOverall, although the target problem is interesting, the paper may need a major revision to improve clarity and method justification. Thus, I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, all reviewers have responded to the authors\\u2019 rebuttal and some also followed up with more questions. But there is only one round of response from the authors, without follow-up to address the further concerns. With no more discussion, I think these concerns still remain in the current version.\"}", "{\"comment\": \"Thank you for your detailed review. We are pleased to provide answers to your questions:\\n\\n\\n## W1 The manuscript could benefit from improved clarity and organization\\nThanks a lot for the detailed comments and suggestions. \\nWe have fixed the typos in the revised version of the paper.\\n\\n* The $\\\\zeta_t\\\\Delta(f(\\\\mathbb{E}[X_0|X_t]), y)$ should be the gradient of that distance: We changed it to $\\\\zeta_t\\\\nabla\\\\Delta(f(\\\\mathbb{E}[X_0|X_t]), y)$.\\n* The authors should think about introducing a clear distinction between the $p(X_0|X_t)$ and $p(X|y)$: We have fixed it by calling $p(X_0|X_t)$ the posterior and $p(X|y)$ the conditional distribution.\\n* The abbreviation DPS is used but never introduced: we defined it on page 6 upon its first appearance.\\n\\n## Q1 why $p_{\\\\theta}(y|X_0)$ contains $\\\\theta$?\\n* Thanks for pointing it out. $p(y|X_0)$ is independent of $\\\\theta$, and hence there should be no subscript. 
We have fixed it in the revised version.\"}", "{\"comment\": \"Thanks for your detailed comments. It appears that there have been some misunderstandings, and we have provided our responses to your questions below.\\n\\n## W1 consistency models since these are quite difficult to train and pre-trained CMs are not widely available\\nCM is one of the most widely adopted approaches to distill diffusion models. Training large-scale CMs has been studied extensively (see e.g., [Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models] [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference]), and currently there are several high-quality CMs that are easily accessible online:\\n* CM for ImageNet 64, LSUN Cat and Bedroom 256: https://github.com/openai/consistency_models\\n* CM for Stable Diffusion 1.5, SD-XL and PixArt alpha: https://github.com/luosiallen/latent-consistency-model\\n\\n## W2 the justification for the method is rather weak\\nIn DPS, we only care about whether the approximated sample follows $p(X_0|X_t)$ or not. As long as the samples follow $p(X_0|X_t)$, DPS estimates the conditional score perfectly (Eq 5), and thus achieves sampling from $p(X_0|y)$. The $p(y|X_0)$ part in $p(y|X_t)$ will be handled by the gradient ascent of DPS. Even if $p(y|X_0)=0$ for some initial $t$ and $X_0$, the gradient ascent will make it large for subsequent $t$.\\n\\n## Q1 The presentation of Proposition 3.3 is a bit misleading\\nThanks for the comments. We have included $\\\\sigma < \\\\frac{1}{\\\\sqrt{4 \\\\pi e}}$ in the assumption.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper is about solving inverse problems using a pre-trained denoising diffusion prior. Posterior sampling with diffusion models requires an estimate of the score $\\nabla _{x_t} \\log p_t(x_t) + \\nabla _{x_t} \\log \\int p(y | x_0) p _{0|t}(x_0 | x_t) d x_0$. 
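As a quick sanity check on this decomposition: in a purely Gaussian case the posterior mean $E[X_0 | X_t]$ given by Tweedie's formula, $E[X_0|X_t] = x_t + \sigma_t^2 \nabla \log p_t(x_t)$, is available in closed form. The following small sketch (my own toy example with made-up values, not code from the paper) verifies it numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
s0, sigma = 1.5, 0.7  # prior std of x0 and VE noise level at time t (toy values)
x0 = rng.normal(0.0, s0, size=100_000)
xt = x0 + sigma * rng.normal(size=x0.shape)  # VE forward process: x_t = x_0 + sigma * eps

# Tweedie's formula: E[x0 | xt] = xt + sigma^2 * score of p_t at xt.
# Here p_t = N(0, s0^2 + sigma^2), so the score is -xt / (s0^2 + sigma^2),
# and the posterior mean reduces to a linear shrinkage of xt.
tweedie = xt + sigma**2 * (-xt / (s0**2 + sigma**2))
shrinkage = xt * s0**2 / (s0**2 + sigma**2)
assert np.allclose(tweedie, shrinkage)
```

In a multimodal case, however, this conditional mean can fall between modes, which is the failure mode the approximation discussion turns on.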
While the first term is estimated using the pre-trained score, the second term is usually very difficult to estimate accurately. A common approximation used in the literature involves using $\nabla _{x_t} \log p(y | E[X_0 | X_t = x_t])$, where $E[X_0 | X_t = x_t]$ can also be estimated using the pre-trained score via Tweedie's formula. This approximation results in many inefficiencies, well documented in the literature, and this paper tries to circumvent them by using a sample from the PF-ODE as a replacement. Specifically, the authors use consistency models to speed up the process of sampling from the PF-ODE and to ensure that the differentiation is not costly.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea is quite original with respect to the literature. The paper is well illustrated and clear. The experiments are interesting and extensive; the authors compare to many existing methods, on both pixel space and latent space diffusion.\", \"weaknesses\": [\"The most obvious weakness of the method is the use of consistency models since these are quite difficult to train and pre-trained CMs are not widely available.\", \"In my opinion the justification for the method is rather weak. The paper argues that using a sample from the PF-ODE is valid because the sample has zero density and that furthermore, for the Gaussian mixture example considered, the PF-ODE sample has non-zero density under the posterior $p(x_0 | x_t)$ with high probability. At the same time it is also easy to find examples of a likelihood function $p(y|x_0)$ such that $\int p(y|x_0) p(x_0 | x_t) d x_0 > 0$ for all $x_t$ but $p(y|\Phi(t, x_t)) = 0$ for $x_t$ in a set of positive Lebesgue measure. As a result, I'm not totally convinced that the argument is strong.\"], \"questions\": \"The presentation of Proposition 3.3 is a bit misleading. 
The authors should state in the assumption the condition on $\sigma$ that they use in the proof in order to get a lower bound independent of the dimension, or remove this claim after the proposition.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Revisions\", \"comment\": [\"Summary of Revisions\", \"Thank you for your detailed review. We have uploaded the revised paper, with all revisions marked in blue. Below is a summary of the revisions:\", \"We have removed notation from the abstract, as suggested by h6fG.\", \"We have rephrased \\\"neural network operators\\\" to \\\"neural network measurement operators,\\\" as suggested by h6fG.\", \"We have specified the $\\\\sigma$ value in the assumption, as suggested by cxtg.\", \"We have modified $\\\\zeta_t\\\\Delta(f(\\\\mathbb{E}[X_0|X_t]), y)$ to $\\\\zeta_t\\\\nabla_{X_t}\\\\Delta(f(\\\\mathbb{E}[X_0|X_t]), y)$, as suggested by LTYd.\", \"We have distinguished between $p(X_0|X_t)$ and $p(X|y)$ by referring to the former as the posterior and the latter as the conditional distribution, as suggested by LTYd.\", \"We have added the definition of the consistency model and diffusion posterior sampling, as suggested by LTYd and h6fG.\", \"We have removed $\\\\theta$ from $p_{\\\\theta}(y|X_0)$, as suggested by LTYd.\", \"We have included implementation details for Table 1, as suggested by h6fG.\", \"We have cited and discussed additional literature, as suggested by h6fG and kz1a.\", \"We hope that we have addressed your concerns, and we are glad to provide additional clarifications if needed.\"]}", "{\"comment\": \"Thank you for your detailed review. 
We are pleased to provide answers to your questions.\\nWe hope that our response can clarify the misunderstanding, and we kindly ask you to reassess our contribution.\\n\\n## W1 Using CM over the existing inverse problem solving adds a lot of computational burden\\nWe would like to clarify that the additional time and space complexity introduced by using CM is fairly small and should not be considered a significant burden. In Table 4 we have shown that using CM increases time complexity by 30% and increases RAM usage by 1GB. In Table 3 and Figure 5, it is shown that using CM improves DPS significantly, both quantitatively and qualitatively. \\n\\nThe mentioned paper [DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models] is a concurrent work, which will not be published formally until Dec 2024. We were unable to use the insights from a paper published after the ICLR deadline (and the citation of it is not required by ICLR policy).\\n\\n## W2 CM itself can be used as a good prior for solving inverse problems\\n[CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems] is a concurrent paper published in Oct 2024, so we could not compare with it before the ICLR deadline. Besides, CoSIGN requires training, while our approach, DPS, and the other methods that we compare to are zero-shot. In our opinion, CoSIGN and our approach attempt to solve different problems under different settings, and hence should not be compared directly. However, its insight or idea may be helpful in the zero-shot setting, and further exploration is an interesting future direction.\\n\\n## W3 More DIS baselines are desired such as DDNM:\\nApproaches such as DDNM [Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model] require $f(.)$ to be linear, while in this paper we consider more general and complex scenarios where $f(.)$ may be a neural network. To the best of our knowledge, DDNM cannot be used to solve non-linear problems. 
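To make the linearity requirement concrete: DDNM enforces data consistency through the pseudo-inverse decomposition $\hat{x} = A^\dagger y + (I - A^\dagger A)\bar{x}$, which presumes a linear operator $f(x) = Ax$. A minimal sketch (our own illustration with toy sizes, not DDNM's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 10))     # a linear measurement operator f(x) = A x
x_true = rng.normal(size=10)
y = A @ x_true                   # observed measurements

A_pinv = np.linalg.pinv(A)
x_prior = rng.normal(size=10)    # stand-in for a denoised sample from the prior

# Range-space part pinned down by y; null-space part filled in from the prior.
x_hat = A_pinv @ y + (np.eye(10) - A_pinv @ A) @ x_prior

# Data consistency holds exactly -- but only because A is linear.
assert np.allclose(A @ x_hat, y)
```

For a neural-network $f(.)$ there is no analogue of $A^\dagger$ or of the null-space projector, which is why this construction does not carry over.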
We have already included six strong baselines such as DPS, FreDOM, MPGD, UGD, LGD, STSL and clearly demonstrated the advantage of our method over the baselines when $f(.)$ is a neural network.\"}", "{\"title\": \"Reviewer's Response\", \"comment\": \"I thank the authors for their response. I have read all the reviews and responses. My concerns remain. Here are my comments.\\n\\n[Literature]\\n\\n- I would like to clarify that the example paper [1] I cited was intended to highlight that the literature review in this work is not comprehensive. While the authors focus on addressing the mean-based approximation challenge in diffusion models, they overlook other known limitations of \\u201cguidance-based\\u201d approaches, such as their inability to accurately perform posterior sampling. Paper [1] is one example discussing this issue, and I strongly suggest the authors, now that they have such knowledge, cite it in the related works or introduction to strengthen their paper.\\n\\n[Formulation]\\n\\n- My question regarding the VE formulation was to inquire why the authors include extensive mathematical details about this formulation, especially since it is not their contribution nor directly related to the core aspects of their CM framework.\\n\\n[On f(.) ]\\n\\n- I thank the authors for clarifying that CM is not trained when f(.) is present. However, on line 313, the authors mention that CM overfits to the operator f(.) . If CM is not trained conditionally, this statement seems misleading. Could the authors clarify their intended use of the term \\u201coverfit\\u201d in this context?\\n\\n[New Comment \\u2013 Framework Comparison]\\n\\n- The authors apply their framework solely to DPS. As noted by another reviewer, the improvement over DPS is marginal. Overall, the paper lacks sufficient comparisons. 
For instance, how does their framework perform when integrated into other methods provided in the tables or compared against newer diffusion-based inverse solvers such as PSLD [A], ReSample [B], or the general formulation of ReSample (DAPS [C])?\\n\\n[New Comment \\u2013 Applicability to x_0|t Guidance]\\n\\n- The paper evaluates CM only in conjunction with DPS, where the guidance gradient updates $x_t$. I wonder how applicable the proposed framework is to cases where guidance updates $x_{0|t}$. Many newer diffusion-based inverse solvers, including MPGD and ReSample, employ the latter approach and have demonstrated better performance. Could the authors explore or discuss the applicability of their framework in this context?\\n\\nOverall, as pointed out by another reviewer, the framework requires an additionally trained model, adding computational complexity, and the presented results do not justify the practical benefit of this approach.\\n\\n[A] Rout, L., Raoof, N., Daras, G., Caramanis, C., Dimakis, A., & Shakkottai, S. (2024). Solving linear inverse problems provably via posterior sampling with latent diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[B] Song, B., Kwon, S. M., Zhang, Z., Hu, X., Qu, Q., & Shen, L. Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency. In The Twelfth International Conference on Learning Representations.\\n\\n[C] Zhang, B., Chu, W., Berner, J., Meng, C., Anandkumar, A., & Song, Y. (2024). Improving diffusion inverse problem solving with decoupled noise annealing. arXiv preprint arXiv:2407.01521.\", \"typo\": \"Table 3: \\\"MDPG\\\" --> \\\"MPGD\\\"?\"}", "{\"summary\": \"This paper presents an interesting approach for posterior sampling in which $p(x_0|x_t)$ is approximated via a consistency model. Results show improvement over baselines such as DPS\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Easy to follow\\n2. 
The idea of using CM to approximate the PF-ODE solution is interesting\", \"weaknesses\": \"1. Using CM on top of existing inverse problem solving adds a lot of computational burden, while the benefit is not clearly visible. Even though $x_0|x_t$ may be off the manifold of the ground-truth image distribution, this does not imply a sacrifice in reconstruction quality, as demonstrated in this work [1]. The quantitative results do not show a significant improvement over DPS.\\n2. CM itself can be used as a good prior for solving inverse problems: see this work [2]. This paper needs to compare with more recent works in inverse problem solving.\\n3. More DIS baselines are desired, such as DDNM [3].\\n\\n[1] Wang, Hengkang, et al. \\\"DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models.\\\" arXiv preprint arXiv:2405.16749 (2024). NeurIPS 2024\\n\\n[2] Zhao, Jiankun, Bowen Song, and Liyue Shen. \\\"CoSIGN: Few-Step Guidance of ConSIstency Model to Solve General INverse Problems.\\\" ECCV 2024\\n\\n[3] Wang, Yinhuai, Jiwen Yu, and Jian Zhang. \\\"Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model.\\\" The Eleventh International Conference on Learning Representations.\", \"questions\": \"1. More DIS baselines are desired, such as CoSIGN [2] and DDNM [3].\\n\\nI am open to changing my rating if the authors could address my concerns (how clearly a CM could benefit inverse problem solving compared to strong baselines like DDNM)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes using consistency models (CMs) to solve inverse problems with diffusion models. In diffusion inverse problems, common approaches need to go from xt to x0 at every iteration to be able to compute a measurement-guided gradient with respect to the measurement y from x0. 
The majority uses the expectation from Tweedie's formula to compute x0 from xt, which may not result in a good sample for complex multi-modal distributions. The paper proposes to replace this step with a CM. They show improvement upon prior work.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper considers an important problem: how diffusion models can be used to solve complex inverse problems, such as giving a segmented image to reproduce the underlying image. The paper acknowledges a known limitation of one assumption that prior work makes on the distribution of the data (which allows the use of Tweedie's formula), and proposes to address it via consistency models.\", \"weaknesses\": \"I find the contribution incremental, and the presentation can be improved. The main weaknesses are as follows: the mathematical formulation is confusing and the approach to training the CM is not fully fair compared to the baselines.\\n\\n1. The paper trains a CM to be used for diffusion-based inverse solvers. It's not clear why such mathematical details (some are not fully proper) are included, which I do not think is the main contribution of the paper. It's unclear why in (1) the authors formulate the Markov chain following a VE formulation. Could the authors explain this choice? The conventional Markov chain for diffusion models is the popular one based on VP, which has a different mean and variance such that the structure is destroyed, but the energy of the process remains the same.\\n\\n2. Solving inverse problems with diffusion mostly occurs in the regime where the diffusion model is trained unconditionally without the knowledge of the measurement operator f(.) (see the DPS used for vision problems such as deblurring). However, the paper in Section 3.4 discusses that the CM is overfitting to f(.) and proposes an approach to make the framework robust. 
So this brings the following: the CM going from xt to x0 seems to be trained with the knowledge of the measurement. If f(.) is involved during the training, then the trained framework is not general anymore (it's problem specific). Hence, the comparison of the proposed framework to models such as DPS, where the model is not trained based on the measurement operator, is not fair. Please provide more information and clarity if this is not the case. The fair comparison would be a scenario where both methods are trained under similar conditions (e.g., not having the knowledge of the forward operator).\", \"here_are_some_questions_concerning_this\": [\"Is CM trained with knowledge of f(.), or is this a misunderstanding?\", \"If the CM is trained with f(.), please explain and justify comparing it to methods like DPS that don't use this information?\"], \"more_comments_are_below\": [\"Lack of thorough literature\", \"The limitations of DIS are not fully explained in the intro. Indeed, the mean-based approximation is one challenge. A few others are related to whether methods such as DPS are doing posterior sampling or using the measurement to guide the process onto likely solutions (see [1]).\", \"The paper needs improvement in presentation. Here are a few examples:\", \"While notations such as Xt and X0 are known to the reader with knowledge of diffusion models, these are used in the abstract and intro without introducing them. Hence, I suggest re-writing the abstract without these notations and introducing the diffusion in the introduction before using x0, xt, etc.\", \"Consistency models are not defined and introduced in the intro, but the authors explain that they are used to improve performance. I suggest the authors provide a brief definition or explanation of consistency models in the introduction.\", \"How the results are generated for Table 1; this appears abruptly without proper explanation. 
I suggest including a brief explanation of the methodology used to generate the results in Table 1.\", \"Some terms within the manuscript are not precise and clear. Please provide clarifications.\", \"Section 1: what is meant by \\\"neural network operators\\\"? Does this refer to the measurement operator or the score function of the diffusion? I suggest saying \\\"measurement operators\\\" instead of \\\"operators\\\". Please clarify \\\"neural network operators\\\".\", \"[1] Wu, Z., Sun, Y., Chen, Y., Zhang, B., Yue, Y., & Bouman, K. L. (2024). Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors. arXiv preprint arXiv:2405.18782.\"], \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a new approach using CMs to generate realistic image samples in Diffusion Inverse Solvers, improving the application of complex, non-linear neural network operators like those in semantic segmentation, room layout estimation, image captioning, and image classification. 
Unlike traditional methods that produce low-probability images, the incorporation of CMs is expected to maintain sample realism, resulting in more accurate posterior approximations, particularly when neural network-based operators are involved.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of using CMs in the process of inverse problems/posterior approximation seems novel and interesting.\\n\\nThe proposed method outperforms baseline approaches, particularly when compared to straightforward extensions\", \"weaknesses\": \"Although interesting, the novelty of the presented work is marginally above the acceptance threshold, since the only contribution seems to be the use of CMs in order to compute $x_0$ from $x_t$.\\n\\nThe manuscript could benefit from improved clarity and organization, as certain sections are challenging to follow. See further remarks.\", \"further_remarks\": \"In the presented algorithm, the updates are stated as $\\\\zeta_t \\\\Delta\\\\left(f(x_{0 \\\\mid t}), y\\\\right)$, where $\\\\Delta$ is defined as some distance. The update of $x_t$, however, should be the gradient of that distance.\\n\\nIn order to enhance the readability of the work, the authors should think about introducing a clear distinction between the posterior $p(x_0|x_t)$ and the posterior $p(x|y)$, given that y is the observation.\\n\\nThe abbreviation DPS (Diffusion posterior sampling) is used but never introduced.\", \"questions\": \"The presented manuscript often states likelihood terms as $p_\\\\theta(y|x)$. Please elaborate on why it contains theta, as the likelihood in general is not a learned function with the same parameters as the learned data distribution $p_\\\\theta(x)$.\\n\\nFurthermore, I would be interested in the extent to which the PF-ODE solution differs from the MAP solution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
4xWQS2z77v
Exploring The Loss Landscape Of Regularized Neural Networks Via Convex Duality
[ "Sungyoon Kim", "Aaron Mishkin", "Mert Pilanci" ]
We discuss several aspects of the loss landscape of regularized neural networks: the structure of stationary points, the connectivity of optimal solutions, paths with non-increasing loss to an arbitrary global optimum, and the nonuniqueness of optimal solutions, by casting the problem into an equivalent convex problem and considering its dual. Starting from two-layer neural networks with scalar output, we first characterize the solution set of the convex problem using its dual and further characterize all stationary points. With this characterization, we show that the topology of the global optima goes through a phase transition as the width of the network changes, and construct counterexamples where the problem may have a continuum of optimal solutions. Finally, we show that the solution set characterization and connectivity results can be extended to different architectures, including two-layer vector-valued neural networks and parallel three-layer neural networks.
[ "Convex duality", "Machine Learning Theory", "Loss Landscape", "Optimal Sets" ]
Accept (Oral)
https://openreview.net/pdf?id=4xWQS2z77v
https://openreview.net/forum?id=4xWQS2z77v
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xbRGiMTN7G", "xB20eD6HS2", "tQWcfy1Xzt", "reIQdykr4F", "onlMwiYN9h", "jx1I0WwjC0", "iWVnqXhJVF", "g3I55Rji6S", "fewshNicIW", "eghAeItYMz", "dcaGMiYNJU", "d1qqBpoq4e", "XTgLhVBEbM", "WJ5hRMRemJ", "Sk6WkghmgQ", "Shg8WqadXo", "SMO5EcTC1y", "PrQin5TgUr", "HhvVGoYFPR", "HQoVKFvKsC", "H7UGmfdtQR", "FZyygWqDvC", "F1EbEef3kf", "3yB5yBUuw8", "01kTFWv38E" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732216356207, 1732539670551, 1732499692563, 1732537125882, 1732216670208, 1732463032371, 1737523972062, 1732216069670, 1734342716204, 1732216905692, 1732216257573, 1732499734224, 1732528713452, 1732626111366, 1732542921432, 1732216541973, 1732567136637, 1730648358551, 1730511350125, 1730514843119, 1732499850378, 1730211753078, 1732216683020, 1730560760945, 1732499769185 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_iEp8" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_eG7f" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_ki4t" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Area_Chair_SqQk" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9266/Reviewer_Rcks" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_eG7f" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_UbdA" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_UbdA" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_eG7f" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_iEp8" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_ki4t" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ], [ "ICLR.cc/2025/Conference/Submission9266/Reviewer_Rcks" ], [ "ICLR.cc/2025/Conference/Submission9266/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your helpful comments on the paper. We have addressed them as follows:\\n\\n**1. Accessible explanations of theoretical results**\\n\\nWe agree that the paper would benefit from more intuition and have uploaded a major revision improving the presentation. Please see the general response. \\n\\n**2. Empirical validation of theoretical results with real-world datasets**\\n\\nUnfortunately, it is extremely hard to verify our findings with real-world datasets and large models. There are two reasons:\\n\\n1) Computing the critical widths $m^{\\\\ast}$, $M^{\\\\ast}$ is challenging. Though we present how we can compute the critical widths in practice (see Remark D.1 or answer 4 of the rebuttal), the algorithm is exponential in the number of data points (the complexity is actually $O((n/d)^{dn})$, though we have an algorithm to upper bound $m^{\\\\ast}$ by first computing a solution to the convex training problem and pruning the solution (as introduced in [6]). A naive bound is $m^{\\\\ast} \\\\leq n$). As finding $m^{\\\\ast}$ is equivalent to finding the sparsest points within a convex set, we believe that the problem could be NP-hard (e.g. 
[1] shows that finding a sparsest point in an ellipsoid is NP-Hard), and there might not be any efficient (or practical) algorithm to find these widths. \\n\\n2) Obtaining the optimal solution set is challenging for real-world datasets and large models. This is because even for two-layer networks, training neural networks to global optimality can be NP-Hard in certain scenarios ([2], [3]). Even in cases where we can train networks to global optimality, e.g. when the network is sufficiently wide so that it does not have spurious local minima, we could use the convex reformulations to obtain the solution set, but one problem is that the number of variables (which scales as $O((n/d)^d)$) for the problem will be too large, and finding the exact optimal solution set is impractical.\\n\\nNevertheless, we think that this weakness demonstrates the theoretical strength of our results, as they show how theoretical analysis can lead to scientific statements concerning neural network training that empirical experiments could never grasp (due to computational complexity). \\n\\n**3. Notations to the main paper**\\n\\nWe summarized and moved the notation part to the main paper in the revised manuscript. Please refer to the general response.\\n\\n**4. A way to compute or estimate the critical widths $m^{\\\\ast}$, $M^{\\\\ast}$**\\n\\nThere exists an algorithm to exactly compute the critical widths $m^{\\\\ast}, M^{\\\\ast}$. The algorithm is discussed in Remark D.1, and we have noted in the main paper that such an algorithm exists, referring the reader to the appendix.\\n\\n**5. How \\\"the descent path to arbitrary global minimum\\\" can motivate practical algorithms**\\n\\nThe construction of a path that connects global minima can motivate new algorithms that search the optimal solution space. 
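As a rough illustration of the variable count mentioned in 2) above: each convex variable corresponds to an activation pattern $\\\\mathrm{Diag}(1(Xh \\\\geq 0))$, and the number of distinct patterns can be probed by sampling random directions $h$ and compared against Cover's bound [4] (a toy sketch of our own, not code from the paper):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n, d = 12, 3
X = rng.normal(size=(n, d))            # data matrix, rows in general position

# Sample many random hidden directions h and record the patterns 1(Xh >= 0).
H = rng.normal(size=(200_000, d))
n_patterns = len(np.unique(H @ X.T >= 0, axis=0))

# Cover (1965): n hyperplanes through the origin in R^d cut it into at most
# 2 * sum_{k=0}^{d-1} C(n-1, k) regions, bounding the number of patterns.
cover_bound = 2 * sum(comb(n - 1, k) for k in range(d))
assert n_patterns <= cover_bound       # 134 for n=12, d=3
```

For fixed $d$ this bound grows polynomially in $n$, which is the source of the $O((n/d)^d)$ scaling above.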
More specifically, the \\\"meta-algorithm\\\" that was used to prove that $\\\\mathcal{P}^{*}(n+1)$ is connected could be directly used to construct a path between two points with low training loss. \\n\\nThe theorem also has a practical implication: the loss landscape is benign, and even though it is nonconvex, naive methods may still find good parameters.\\n\\n**6. Clarifications: $\\\\mathrm{Diag}(1(Xh \\\\geq 0))$, $\\\\mathcal{P}^{\\\\ast}_{\\\\nu^{\\\\ast}}$, Figure 2 bottom, the three interpolation problems**\\n\\nWe clarified these in the revised manuscript. Specifically, $\\\\mathrm{Diag}(1(Xh \\\\geq 0))$ is defined in the notations, the sentences explaining $\\\\mathcal{P}^{\\\\ast}_{\\\\nu^{\\\\ast}}$ and the three interpolation problems were rewritten, and the bottom of Figure 2 is explained with labels.\\n\\n**References**\\n[1] Natarajan, Balas Kausik. \\\"Sparse approximate solutions to linear systems.\\\" SIAM Journal on Computing 24.2 (1995): 227-234.\\n\\n[2] Boob, Digvijay, Santanu S. Dey, and Guanghui Lan. \\\"Complexity of training ReLU neural network.\\\" Discrete Optimization 44 (2022): 100620.\\n\\n[3] Froese, Vincent, and Christoph Hertrich. \\\"Training neural networks is NP-hard in fixed dimension.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Cover, Thomas M. \\\"Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition.\\\" IEEE Transactions on Electronic Computers 3 (1965): 326-334.\\n\\n[5] Haeffele, Benjamin D., and Ren\\u00e9 Vidal. \\\"Global optimality in tensor factorization, deep learning, and beyond.\\\" arXiv preprint arXiv:1506.07540 (2015).\\n\\n[6] Mishkin, Aaron, and Mert Pilanci. \\\"Optimal sets and solution paths of ReLU networks.\\\" International Conference on Machine Learning. PMLR, 2023.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for the reply and the revision. My concerns are addressed. 
I have raised my score.\\n\\nBest regards,\\nReviewer iEp8\"}", "{\"comment\": \"Dear Reviewer Rcks,\\n\\nWe believe that we have addressed your concerns in our responses and the revised manuscript. As the deadline is approaching, we would like to hear your feedback so we can respond before the discussion period ends. Please feel free to raise questions if you have other concerns. Thank you very much for your support, we sincerely appreciate that!\\n\\nBest regards, Authors\"}", "{\"comment\": \"I would like to thank the authors for addressing my questions and concerns in detail, I believe now I understand the contributions of the papers better after reading the authors' response. I also checked the revised version of the paper, and it is much more readable comparing to the original version.\\n\\nOverall, I believe this paper has solid theoretical contributions on understanding the loss landscape of certain neural networks with L2-regularization, and would like to suggest a clear acceptance of this paper to ICLR2025. \\n\\nFinally, I have one last question: I think I'm able to understand the technical results in this paper, and it is based on a line of works [1-3] that studies the convex formulation of neural networks as mentioned in related works part. To be honest, I'm not familiar with the technical results this line of work, so I can not accurately tell how \\\"novel\\\" the technical parts is. Thus, could the authors elaborate a bit more on the connections and differences between this paper and the line of works[1-3] (or maybe other works follows this line)? \\n\\n[1] Pilanci, Mert, and Tolga Ergen. \\\"Neural networks are convex regularizers: Exact polynomial-time convex optimization formulations for two-layer networks.\\\" \\n[2] Ergen, Tolga, and Mert Pilanci. \\\"The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models.\\\" \\n[3] Mishkin, Aaron, and Mert Pilanci. 
\\\"Optimal sets and solution paths of ReLU networks.\\\"\\n\\nThank you and best regards,\\nReviewer eG7f\"}", "{\"comment\": \"Thank you for your helpful comments on the paper. We have addressed them as the following:\\n\\n**1. Significance of the result in 3.3**\\n\\nThere are two consequences in the result of Theorem 3.3, both are of theoretical interest. The consequences are:\\n1) We show that free skip connections may affect the qualitative behavior of the optimal solutions(e.g. uniqueness and sparsity)\\n2) The nonunique min-norm interpolators imply that we have multiple interpolators with the same norm but with different generalization performances, and their behavior may diverge significantly outside the training set. \\n\\nThe first consequence is that in the literature, free skip connections have been imposed in the problem for a clearer derivation of optimal solutions ([1], [2], [3]). It has been believed in [2], [3] that not having free skip connections does not affect the optimal solution that much in practice. For example, in pg 2 of [3], it is stated that\\n\\n\\\"This skip connection avoids some complications and allows better characterizing \\u02c6fS but does not meaningfully change the behavior [3]\\\".\\n\\nOur example shows that this statement is false, and not having free skip connections can make the problem have multiple min-norm interpolators and make the solution not sparse (i.e. the optimal interpolator may not be the sparsest). Thus, free skip connection may change the qualitative behavior of the solution (uniqueness & sparsity).\\n\\nAnother significance is that these different minimum-norm interpolators, though having the same norm, have different generalization performances (because they behave differently outside the training set). 
Generalization bounds are generally associated with parameter norms, but here we see that interpolators with the same norm are drastically different: as $x \\\\rightarrow \\\\infty$, the difference between the two functions diverges, so they have different population losses (which is a qualitative difference). \\n\\nMoreover, the example demonstrates how convex duality can construct structured counterexamples motivated by the hidden convex geometry. The intuition for such a construction is to make the set $\\\\mathcal{Q}_{X} = \\\\{ReLU(Xu)\\\\ | \\\\ \\\\lVert u \\\\rVert_2 \\\\leq 1\\\\} $ meet its supporting hyperplane at $n+1$ points. \\n\\nLastly, we would like to emphasize that while the setting $d = 1$ could seem restrictive, understanding minimum-norm interpolators is an important problem that has been extensively discussed [1]~[7]. We believe it is an interesting problem whether we can construct counterexamples where the minimum-norm solutions differ the most, but we would like to leave it for future work.\\n\\n**2. Significance of the result in Theorem 4**\\nThe significance of Theorem 4 lies not in the proof strategy, but in the fact that we can describe a certain subset of the optimal set of a deep neural network in a precise manner. It is challenging to describe the optimal set of deep networks because of its nonconvexity and the lack of a good theoretical structure to work with. Nevertheless, due to page limitations, we moved the Theorem to the appendix and mention in the main paper that such a characterization is possible. \\n\\n**3. Universality of the techniques**\\nFor the possible extensions to different activation functions, please see the general response.\\n\\nAlso, we have extensions of the result to parallel three-layer neural networks (Theorem 3 of the main paper). Also, there are discussions of convex reformulations of parallel neural networks of depth L [11], and we believe that at least some of the results (e.g. 
the polytope characterization) could be extended to these settings. \\n\\nOverall, while these results do not extend to networks with arbitrary activations and architectures, they are not limited to two-layer ReLU neural networks.\"}", "{\"comment\": \"The authors have addressed every concern raised in my review, therefore I suggest this paper to be accepted at ICLR25.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all reviewers for their time and thoughtful reviews. The manuscript has improved a lot thanks to the sincere reviews.\\n\\n**Presentation**\\n\\nMany reviewers suggested improving the overall presentation of the paper, making it more accessible and readable. In the revised manuscript, we believe we have improved the presentation by:\\n\\n1) Clearly defining the problem of interest and notations in the main paper\\n2) Making the main results (Theorem 1, 2, Proposition 3) more accessible by removing mathematically heavy notations and explaining the main concepts verbally, deferring a rigorous treatment to the appendix\\n3) Clarifying any undefined variables. \\n4) Moving \\\"most\\\" relevant results from the appendix to the main paper. If the result was mathematically very complicated and is a direct extension/application of existing results (e.g. Prop F.5, Thm F.2, Prop F.1, F.2, F.3), we left the results in the appendix. \\n5) Clarifying every unclear sentence / section / figure (especially the second part of Figure 2) that the reviewers pointed out.\\n\\nOur submission is updated with the revised version. 
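As a concrete, self-contained illustration of the activation-pattern enumeration underlying these convex reformulations (an editorial sketch, not part of the paper or the rebuttals; the toy dataset, seed, and variable names are our own choices), the patterns $D_i = \mathrm{Diag}(1(Xh \geq 0))$ for a small dataset with $d = 2$ can be enumerated exactly by walking the arcs of the unit circle cut out by the data points:

```python
import numpy as np

# Toy data: n = 5 points in d = 2 (arbitrary seed; illustration only).
rng = np.random.default_rng(0)
n, d = 5, 2
X = rng.standard_normal((n, d))

# Each row x_i splits the unit circle of directions h at the two angles
# orthogonal to x_i; the sign pattern 1(X h >= 0) is constant on every arc.
boundary = []
for x in X:
    t = np.arctan2(x[1], x[0]) + np.pi / 2  # direction orthogonal to x
    boundary += [t, t + np.pi]
boundary = np.sort(np.mod(boundary, 2 * np.pi))

# Sampling one direction per arc (its midpoint) enumerates all patterns.
arc_lengths = np.diff(np.append(boundary, boundary[0] + 2 * np.pi))
midpoints = boundary + arc_lengths / 2

patterns = {
    tuple((X @ np.array([np.cos(t), np.sin(t)]) >= 0).astype(int))
    for t in midpoints
}
print(len(patterns), "distinct activation patterns")
```

For $n$ generic points in the plane this yields $2n$ patterns (the classical region count of Cover, 1965, cited in the responses), and the number of variable blocks $P$ in the convex reformulation is exactly this pattern count.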
\\n\\n**Different Activations**\\n\\nAnother common question was the scope of the analysis, in particular for different activation functions. We would like to note that convex reformulations exist for many different activation functions: for example, they exist for piecewise linear [1], polynomial activations [2], and threshold activations [3]. Our results are easily extended to piecewise linear activations, and similar results could be developed for different activations. Our results may not be trivially extendible to other activations such as threshold or polynomial, as their convex reformulations have a different form compared to ReLU.\\n\\nWe sincerely appreciate the reviewers' insightful questions and thoughtful feedback. We look forward to a meaningful discussion during the discussion phase!\\n\\n**References**\\n\\n[1] Ergen, Tolga, and Mert Pilanci. \\\"The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models.\\\" arXiv preprint arXiv:2312.12657 (2023).\\n\\n[2] Bartan, Burak, and Mert Pilanci. \\\"Neural spectrahedra and semidefinite lifts: Global convex optimization of polynomial activation neural networks in fully polynomial-time.\\\" arXiv preprint arXiv:2101.02429 (2021).\\n\\n[3] Ergen, Tolga, et al. \\\"Globally optimal training of neural networks with threshold activation functions.\\\" arXiv preprint arXiv:2303.03382 (2023).\"}", "{\"metareview\": \"This paper studies the loss landscape of regularized ReLU networks, focusing on the structure of stationary points, the connectivity of optimal solutions and the non-uniqueness of optimal solutions. The authors start by considering a two-layer network with scalar output, and characterize the connectivity of the solution for different sizes of the network. 
Then, they consider extensions to minimal norm interpolation, vector-valued networks, and deep neural networks.\\n\\nThe reviewers found the results insightful (especially the 'staircase of connectivity' phenomenon) and well-integrated in the related literature. The proposed framework based on convex duality is rather general: it does not require large over-parameterization or specific scalings, which is a strong point. At the technical level, the origin of this framework is to be found in earlier work by Pilanci et al., which somewhat limits the technical novelty. However, significant effort is required to tackle the settings considered here. All reviewers are positive about the paper, and so am I. Thus, I recommend it to be accepted.\", \"additional_comments_on_reviewer_discussion\": \"All the points raised by the reviewers have been resolved during the rebuttal and discussion, so the opinion on the paper is uniformly positive.\"}
We added the reference in the related works section.\"}", "{\"comment\": \"We are very glad to see that the reviewer appreciated our work!\\n\\n**1. A deeper discussion on the topological implications of the results**\\n\\nThe main topological implication of our theorem is that as the width increases, the topology of the solution set is complicated at first, but becomes simpler later on.\\n\\nAn important implication of our theorems is that all sublevel sets are connected when the width is sufficiently large ($n+1$). Hence, our result on connectivity is a non-asymptotic improvement of [2], enabling us to discuss the topology of sublevel sets not only in the case $m \\\\rightarrow \\\\infty$.\\n\\nAnother interesting direction to study the topology of the optimal set of neural networks is discussing its homology (i.e. we want to find out if the optimal set looks like a simple sphere, or something like a torus with a hole, etc.). We believe that our framework could be used to count the number of holes that an optimal set of a neural network has, once we understand the homology of the cardinality-constrained set $\\\\mathcal{P}^{*}(m)$ in the paper. We believe such a characterization could make our understanding of neural network loss landscapes more complete, by excluding certain shapes as impossible or finding out that the \\\"bottom of the loss landscape\\\" looks like a certain shape.\\n\\n[1] Haeffele, Benjamin D., and Ren\\u00e9 Vidal. \\\"Global optimality in tensor factorization, deep learning, and beyond.\\\" arXiv preprint arXiv:1506.07540 (2015).\\n\\n[2] Freeman, C. Daniel, and Joan Bruna. \\\"Topology and geometry of half-rectified network optimization.\\\" arXiv preprint arXiv:1611.01540 (2016).\"}", "{\"comment\": \"Dear Reviewer iEp8,\\n\\nWe believe that we have addressed your concerns in our responses and the revised manuscript. As the deadline is approaching, we would like to hear your feedback so we can respond before the discussion period ends. 
Please feel free to raise questions if you have other concerns. Thank you very much for your support, we sincerely appreciate that!\\n\\nBest regards, Authors\"}", "{\"comment\": \"I would like to thank the authors for addressing my concerns and questions, particularly for also explaining why it is challenging to empirically validate the staircase of connectivity.\\n\\nI think that the revised manuscript is a good theoretical paper and would suggest the acceptance of this paper to ICLR25.\"}", "{\"comment\": \"Thank you for the detailed explanation of the novelty of the paper. I believe I have a better understanding of the overall contributions of this paper after the rebuttal session, and thus I have decided to raise my score.\\n\\nThank you and best regards, Reviewer eG7f\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the reply! My concerns are addressed.\"}", "{\"comment\": \"Thank you for your helpful comments on the paper. We have addressed them as follows:\\n\\n**1. Different activation functions**\\n\\nFor the possible extensions to different activation functions, please see the general response.\\n\\nThough this is the case, this paper only discusses neural networks with ReLU activation - we now explicitly state that our scope is networks with ReLU activation in the Introduction and problem settings.\\n\\n**2. Presentation of Section 2**\\nThank you for pointing out the insufficient explanation on convex reformulations in Section 2. In the revised manuscript we added more explanation to Section 2. 
Specifically, we\\n\\n1) clearly defined $D_i, X, y$ and the shape of each matrix in the problem setting and notations\\n2) We specifically stated that when $m \\\\geq m^{\\\\ast}$ for some critical threshold $m^{\\\\ast} \\\\leq n$, we have a convex reformulation.\\n3) We gave an exact notion of what \\\"equivalent convex problem\\\" means, and stated how the two solutions are related.\\n4) We specifically stated that when $m \\\\geq m^{\\\\ast}$, strong duality holds and the dual problem also has the same optimal value as the primal problem.\\n\\n**3. Clarifications: Red points in Figure 1, the lower half of Figure 2**\\n\\nFirst, the red points in Figure 1 are actual optimal points in parameter space. Note that all permutations of each red point will be optimal: you could understand Figure 1 as all those permutations each plotted as red points, and as permutations are finite, we have a finite number of red points.\\n\\nWe clarified the lower half of Figure 2 in the revised manuscript.\"}", "{\"comment\": \"I can elaborate more on the technical novelty of this work.\\n\\n[1] first introduces that neural networks have convex reformulations, i.e. we have equivalent convex problems for two-layer ReLU neural networks with regularization, and there exists a solution mapping between the two. This is not our technical novelty, as it was done earlier. [3] is the main paper that we are motivated by: it describes the \\\"solution set\\\" (which is the set of optimal parameters of the neural network) of the neural network using this formulation, and characterizes it as a polytope. [2] discusses that stationary points correspond to subsampled convex problems, which was discussed in different literature as well.\\n\\nOur main technical novelties are:\\n\\n1) We associate the characterization in [3] with the dual optimum, leading to a different proof and a clear geometric intuition of the optimal solution set. \\n2) The most novel proof is Lemma D.1. and Theorem D.1. 
in the proof. The proof is based on a nontrivial construction of a path between two different points in $\\\\mathcal{P}^{*}(n+1)$ (the cardinality-constrained set), and we made a lot of effort to make this work. Such a proof was never given in the above literature on convex networks, or in proving connectivity.\\n3) Another technical novelty is Section 3.3. The existence of non-unique minimum-norm interpolators has been discussed, but no work has discussed it using a geometric structure involving convex duality that could further be applied to different problems.\\n\\nAlso, we would like to highlight a conceptual novelty: there have been discussions of the irreducible solution set and of pruning a solution, but the existence of exact thresholds of topological change and their relation with the convex problem has never been discussed, either in the convex networks literature or in the NN theory literature (as far as we are aware). \\n\\nThank you very much for your time, constructive feedback and suggesting a clear acceptance.\"}", "{\"summary\": \"The authors present a deep and novel analysis of the loss landscape and its optimal solutions in the context of regularized neural networks. They also show that the topology of the global optima undergoes a phase transition as the width of the network changes.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, easy to follow, and its results are clear and novel. The theoretical results stand out for their depth and clarity, as do the empirical results. 
The support of images is quite helpful when reading through some mathematical arguments or proofs of theoretical results.\", \"weaknesses\": \"Perhaps a deeper discussion on the topological implications of their results would be beneficial.\", \"questions\": \"No questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the loss landscape of ReLU networks with $L_2$-regularization. The authors first study the canonical case of a two-layer network with scalar output, and characterize the connectivity of the solution for different numbers of neurons. Then, the authors extend the results to a more general class of problems, including: minimal norm interpolation, vector-valued ReLU networks, and parallel deep neural networks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper develops a general framework to characterize the global optimum of the regularized ReLU network via convex duality. From my understanding, the key contribution of this convex duality framework in Theorem 1 is that it allows one to characterize the \\\"direction\\\" of the weights separately in the regularized case, which is then useful for characterizing the global optimum. I believe this contribution is novel and solid.\\n\\n2. I think the framework for characterizing the global optima is quite general even though it is restricted to ReLU networks. In particular, it does not require large over-parameterization, special scaling, or special data distributions. 
Thus, I believe the results can be applied to other more specific settings and are potentially useful for characterizing other properties besides the connectivity of the solutions.\", \"weaknesses\": [\"1. Although I believe this paper has a solid contribution, I found there are a few parts whose significance I don't understand:\", \"I think I understand the contributions in Sections 3.1 and 3.2; however, I'm not sure about Section 3.3, where the authors showed that for a class of datasets with dimension $=1$ that satisfies certain conditions, if the network does not have a skip connection, then there are infinitely many minimal norm interpolators (which is a connected set). I'm not sure about the significance of these results, since (1) it is for a special construction of the dataset. (2) it might be that those infinitely many minimal norm interpolators behave qualitatively almost the same, for example, the radius of the solution set is small. Could you discuss more on the significance of the results?\", \"In Section 4, I understand the contribution of generalizing it to a vector-valued function. However, I'm not sure about the significance of the results in Theorem 4, since anyway you fixed all the other layers but only keep two consecutive layers, and technically I didn't see any difference from a two-layer network. Could you discuss more on the significance of the results?\", \"2. One main issue of the paper is the writing, especially the main part of the paper. I checked the appendices, and it is much more readable. So I suggest the authors consider rearranging the content. To name a few issues\\\\typos that confused me when reading the main part:\", \"Lines 215 and 216: what is the definition of $\\\\mathcal{S}_i$?\", \"Line 223: what is the definition of \\\"optimal model fit\\\", what is $u_i^*, v_i^*$, and why is it unique?\", \"Line 232: the triangle inequality is reversed. 
Also could you be more specific about the discussion between Lines 229-232?\", \"The statement of Proposition 1: First, you use $v_{i,1}$ to denote the first entry of a vector; could you specify this? Also, you define $s_k = \\\\sum_{i=1}^k v_{n-i+1},$ but also require $||s_k|| =1, s_n = [0,1]^\\\\top$; could you discuss the existence of such a construction?\", \"In equation (7), could you specify the dimensions of the variables?\"], \"questions\": \"1. A general question is about the scope of the techniques in this paper. It seems that the techniques only apply to two-layer ReLU networks, since the problem can be equivalently written as a convex problem with regularization. It is not applicable to other activation functions and seems hard to generalize to multi-layer cases. Thus, could you elaborate more on the universality of the techniques?\\n\\n2. The results in this paper require the number of neurons $m \\\\geq m_*$. As far as I understand, $m_*$ is the minimal number of neurons needed to achieve the optimal model. I'm wondering what would happen if $m<m_*$? Also, in general, what is the scaling of $m_*$ depending on $n,d$? \\n\\n3. The results in Theorem 2 consider the connectivity of the optimal solution set, which is equivalent to the connectivity of a path with $0$ perturbation. What about the case that allows $\\\\epsilon$-perturbation along the path? Are the techniques still applicable?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"In this paper, the authors apply convex duality for two-layer ReLU networks to study mode connectivity and unique minimal-norm interpolator problems, while also working to generalize this framework. 
Specifically, the authors have:\", \"identified the staircase of connectivity that describes how connectivity evolves with width;\", \"constructed non-unique minimal-norm interpolators by breaking the uniqueness conditions;\", \"generalized the optimal polytope to the general cone-constrained group LASSO problem and applied it to more complicated architectures.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors apply the new technique of convex duality to problems of connectivity and minimal-norm interpolation, which have been studied previously using other methods. This approach yields both generalizations of existing results and new insights into these problems. Overall, I believe this paper is a strong demonstration of how convex duality can be leveraged in the theoretical study of machine learning. The abstract concepts are clarified through figures and examples.\", \"weaknesses\": [\"I have some concerns with the presentation of this work. Specifically:\", \"If I understand correctly, the convex duality only applies to ReLU networks. This is not emphasized.\", \"I found Section 2 difficult to follow without prior knowledge of Pilanci & Ergen (2020). The relations between (1), (2), and (3) are mentioned but not explained (When do they have the same loss value? How do the solutions relate to each other?). The dimensions of $X$ and $y$ are not mentioned. $D_i$ is not explained.\", \"In Figure 1, is each red point truly a unique solution, or does it represent solutions equivalent under permutation (p-unique)? 
If they are p-unique solutions, readers may get the wrong impression.\", \"The lower half of Figure 2 is not explained.\"], \"questions\": \"I wonder if the authors have any comment regarding the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer UbdA,\\n\\nThank you for appreciating our work. As the deadline is approaching, we would like to hear your feedback about the revised manuscript so we can respond before the discussion period ends. Please don't hesitate to raise questions if you have other concerns. Thank you very much for your support, we sincerely appreciate that!\\n\\nBest regards, Authors\"}", "{\"summary\": \"This manuscript proposes a characterization of the topology of the global optima in the loss landscape, focusing on regularized two-layer neural networks with free skip connections, with a partial extension to deep neural networks. The authors provide a characterization of the optimal set in terms of the width of the hidden layer, which determines a so-called \\\"staircase of connectivity\\\": phase transitions occur as the width crosses critical values. The authors study the uniqueness of the minimum-norm interpolator, highlighting necessary guarantees (such as the free skip connections, bias in the training problem and unidimensional data). An experimental study integrates the theoretical findings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper has original content, and bridges together several concepts proposed in the literature on the topic. The method of analysis is rigorous and it gives a solid contribution to the field.\", \"weaknesses\": \"The manuscript presents some unclear sentences (e.g. line 198-199). 
There is a clear math error at line 232 (the triangle inequality holds with the reverse inequality; to be candid, I am quite sure it is a typo) and some symbols are not defined at all, not even in the Appendix (e.g. the symbol P, which occurs very often throughout the entire manuscript).\\nThere are many references to results listed in the Appendix; if relevant, I think it might be better to put them in the main manuscript.\\nAn important reference to the characterization of loss landscapes over neural networks with regularization terms and/or skip connections is missing, also because it gives a theoretical hint on the low importance of skip connections [1].\\n\\n[1] Bucarelli, M. S., D\\u2019Inverno, G. A., Bianchini, M., Scarselli, F., & Silvestri, F. (2024). A topological description of loss surfaces based on Betti Numbers. Neural Networks, 106465.\", \"questions\": \"Can the authors proofread the manuscript again to eliminate typos, unclear sentences and missing notation?\\nCan they make the presentation of the results more readable, including relevant results from the Appendix?\\nCan they integrate the relevant existing literature in the Related Work section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
We \\\"could\\\" start with a cardinality-constrained optimization problem, \\n$$\\n\\\\min_{(u_i, v_i)_{i=1}^{P}} L(\\\\sum_i D_iX(u_i - v_i), y) + \\\\beta \\\\sum_i \\\\lVert u_i \\\\rVert_2 + \\\\lVert v_i \\\\rVert_2 \\\\quad \\\\mathrm{subject} \\\\ \\\\ \\\\mathrm{to} \\\\sum_i 1(u_i \\\\neq 0) + 1(v_i \\\\neq 0) \\\\leq m.\\n$$\\nbut the critical issue is that the above problem is not convex, so it is not clear yet how to deal the case $m < m^{\\\\ast}$. \\n\\nPrecisely speaking, the scale of $m^{\\\\ast}$ does not depend on $n, d$, but depend on the geometry of the dataset $(X, y)$. Specifically, consider $\\\\mathcal{Q}_X = \\\\{ReLU(Xu) \\\\ | \\\\ \\\\lVert u \\\\rVert_2 \\\\leq 1\\\\}$. Suppose we are solving the min-norm interpolation problem. Scale $y$ so that $ty \\\\in \\\\mathrm{Conv}(\\\\mathcal{Q}_X \\\\cup -\\\\mathcal{Q}_X)$. The minimal number of points that are needed to express $ty$ as a sum of vectors from $\\\\mathcal{Q}_X \\\\cup -\\\\mathcal{Q}_X$ becomes $m^{\\\\ast}$. Hence, even for large $n, d$, depending on the dataset $m^{\\\\ast}$ could be very small. It is an interesting open problem how $m^{\\\\ast}$ would behave for random $(X, y)$, but unfortunately, I am unsure how things look like currently.\\n\\n**5. Can we still apply this technique for $\\\\epsilon$ perturbations?**\\nOne thing we could do is using a convex set\\n$(u_i, v_i)_{i=1}^{P}$ where $L(\\\\sum_i D_iX(u_i - v_i), y) + \\\\beta \\\\sum_i \\\\lVert u_i \\\\rVert_2 + \\\\lVert v_i \\\\rVert_2 \\\\leq p^{\\\\ast} + \\\\epsilon$\\nwhere $p^{\\\\ast}$ denotes the optimal value of the convex program. Here we will not have the optimal polytope characterization, so the connectivity behavior could differ and we may have that the set becomes connected even for lesser $m$. 
We think understanding how the connectivity behavior of the set changes as $\\\\epsilon$ changes and understanding the (possible) phase-transitional behavior is a very interesting direction of study, and it also fits well with the initial description of mode connectivity.\\n\\n**6. Clarifications: $\\\\mathcal{S}_i$, optimal model fit, reversed triangle inequality, $\\\\mathcal{P}^{\\\\ast}_{\\\\nu^{\\\\ast}}$, the statement of Proposition 1, Dimensions in equation (7)**\\n\\nWe no longer have complicated statements in the main paper. The optimal model fit and its uniqueness are stated before they are used. We fixed the triangle inequality and added more explanations for unclear statements, and specified the dimensions in equation (7). Please refer to the general response.\\n\\n**References**\\n[1] Hanin, Boris. \\\"Ridgeless Interpolation with Shallow ReLU Networks in $1 D $ is Nearest Neighbor Curvature Extrapolation and Provably Generalizes on Lipschitz Functions.\\\" arXiv preprint arXiv:2109.12960 (2021).\\n\\n[2] Boursier, Etienne, and Nicolas Flammarion. \\\"Penalising the biases in norm regularisation enforces sparsity.\\\" Advances in Neural Information Processing Systems 36 (2023): 57795-57824.\\n\\n[3] Joshi, Nirmit, Gal Vardi, and Nathan Srebro. \\\"Noisy interpolation learning with shallow univariate relu networks.\\\" arXiv preprint arXiv:2307.15396 (2023).\\n\\n[4] Ergen, Tolga, and Mert Pilanci. \\\"Convex geometry of two-layer relu networks: Implicit autoencoding and interpretable models.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\\n\\n[5] Ongie, Greg, et al. \\\"A function space view of bounded norm infinite width relu nets: The multivariate case.\\\" arXiv preprint arXiv:1910.01635 (2019).\\n\\n[6] Savarese, Pedro, et al. \\\"How do infinite width bounded norm networks look in function space?.\\\" Conference on Learning Theory. PMLR, 2019.\\n\\n[7] Parhi, Rahul, and Robert D. Nowak. 
\\\"Banach space representer theorems for neural networks and ridge splines.\\\" Journal of Machine Learning Research 22.43 (2021): 1-40.\\n\\n[8] Ergen, Tolga, and Mert Pilanci. \\\"The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models.\\\" arXiv preprint arXiv:2312.12657 (2023).\\n\\n[9] Bartan, Burak, and Mert Pilanci. \\\"Neural spectrahedra and semidefinite lifts: Global convex optimization of polynomial activation neural networks in fully polynomial-time.\\\" arXiv preprint arXiv:2101.02429 (2021).\\n\\n[10] Ergen, Tolga, et al. \\\"Globally optimal training of neural networks with threshold activation functions.\\\" arXiv preprint arXiv:2303.03382 (2023).\\n\\n[11] Ergen, Tolga, and Mert Pilanci. \\\"Path regularization: A convexity and sparsity inducing regularization for parallel relu networks.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[12] Garipov, Timur, et al. \\\"Loss surfaces, mode connectivity, and fast ensembling of dnns.\\\" Advances in neural information processing systems 31 (2018).\"}", "{\"summary\": \"In this work the authors analyze multiple aspects of the loss landscape of regularized two-layer neural networks with scalar output, including the structure of stationary points, the connectivity of optimal solutions and the non uniqueness of optimal solutions. The main proof strategy is to translate the problem into an equivalent convex problem and characterize its solution set through its dual form.\\nThe authors show that the topology of the global optima goes through a phase transition as a function of the hidden layer width, which they term the staircase of connectivity. 
\\nThis result is extended later to networks with vector-valued outputs, and parallel deep networks of depth 3.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I found the \\\"staircase of connectivity\\\" very insightful, particularly how the connectivity properties of the optimal solutions are connected to critical widths $m^*$ and $M^*$. This finding explains how increasing the number of neurons affects the connectedness of optimal sets, and makes the observation of mode connectivity [Garipov et al. 2018] more precise.\", \"The paper generalizes its findings also to vector-valued networks and deep networks with skip connection, which provides a broader framework that can be applied across different architectures.\"], \"weaknesses\": [\"I found the theoretical results, for instance on the staircase of connectivity, hard to interpret in practice and would benefit from more accessible explanations. While the toy example in Example 1 illustrates the concept, the absence of labels in Figure 2, as well as the notation-heavy formulation makes it difficult for readers to grasp the results intuitively.\", \"Although the toy examples are helpful, the paper lacks actual empirical validation of the theoretic results. I think it would add credibility to this work, if the staircase of connectivity concept would also be tested on actual neural network architectures trained on real data. It would be interesting to see how these results scale with different data distributions and larger models.\", \"Overall I found the work quite difficult to read due to the dense mathematical formalism. I also feel like the section on notations should not be in the appendix, but should - at least in a shortened version - be included in the main paper.\"], \"questions\": \"1. Is there a way to bound or estimate the critical widths $m^*$ and $M^*$ in practice, for instance on real datasets?\\n2. 
In line 186: What does $h$ refer to in $\\\\text{diag}[1 (Xh \\\\geq 0)]$ ?\\n3. It is not very clear to me what lines 225-226 mean. Could you perhaps rephrase it? (That $\\\\mathcal{P}^*_{\\\\nu^*}$ does depend on $\\\\nu^*$, but that the specific choice of it does not matter.)\\n4. Figure 2 bottom: The axis labels are missing and it is not very clear to me what the red and blue lines are supposed to represent.\\n5. In line 351 the author mention three interpolation problems of interest, but only discuss one problem on the minimum-norm interpolation problem. What are the other two interpolation problems and can you also extend your results to these problems?\\n6. The paper describes a path of nonincreasing loss that connects local to global minima. Could this insight be incorporated into practical training algorithms, such as initializing weights or guiding optimizers in large-scale training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer eG7f,\\n\\nWe believe that we have addressed your concerns in our responses and the revised manuscript. As the deadline is approaching, we would like to hear your feedback so we can respond before the discussion period ends. Please feel free to raise questions if you have other concerns. Thank you very much for your support, we sincerely appreciate that!\\n\\nBest regards, Authors\"}" ] }
4xEACJ2fFn
Is the sparsity of high dimensional spaces the reason why VAEs are poor generative models?
[ "Alejandro Ascárate", "Leo Lebrat", "Rodrigo Santa Cruz", "Clinton Fookes", "Olivier Salvado" ]
Variational autoencoders (VAE) encode data into lower dimension latent vectors before decoding those vectors back to data. Once trained, decoding a random latent vector usually does not produce meaningful data, at least when the latent space has more than a dozen dimensions. In this paper, we investigate this issue drawing insight from high dimensional physical systems such as spin-glasses, which exhibit a phase transition from a high entropy random configuration to a lower energy and more organised state when cooled quickly in the presence of a magnetic field. The latent of a standard VAE is by definition close to a uniform distribution on a hypersphere, and thus similar to the high entropy spin-glass state. We propose to formulate the latent variables of a VAE using hyperspherical coordinates, which allows us to compress the latent vectors towards an island on the hypersphere, thereby reducing the latent sparsity, analogous to a quenched spin-glass. We show that this is feasible with modest computational increase and that it improves the generation ability of the VAE.
[ "variational autoencoder", "generative model", "high dimensional statistics", "spin glass", "latent space", "hyperspherical coordinates" ]
Reject
https://openreview.net/pdf?id=4xEACJ2fFn
https://openreview.net/forum?id=4xEACJ2fFn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wsNbiKQ6Ex", "tS3lgfldNZ", "t3QrM3DJpu", "rdMZwrR4Sf", "oFZvE2XjIL", "oDWEnUSfvs", "o1UXFHiHWc", "nwTLNrT0XG", "nOZlMmVbBC", "mAFTws9jqe", "l1DNB9Cbs5", "kl3Mlk6kwZ", "kRcxr91dX8", "fnV0UDkQ8b", "fdtFsT0meY", "e4wP1Bxy5Y", "aXiSHVOpPl", "aH8pZiEy4I", "YdYxlif3A0", "W1c4oIUAB3", "V4QscB9lgl", "UD526C5OdO", "U4qnoadnWH", "ReqORbcuw9", "OZywK2yCd7", "OWMKEmnaB6", "Nset1FKKOI", "NPFYJog97L", "KQXBwP6jUw", "K8nU2HPLWj", "GumJksidKk", "FXLcOzaiAw", "Eg2hjRsFxr", "DPQDwHjone", "BqMB5co3dg", "BHZIuoEh81", "9b33j2DKDj", "9Ot7go7JEg", "7vks58JDbN", "4upYK49sgf", "3bOWnKtsRv", "3RkKFx859I" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732404939206, 1737523965002, 1732807037750, 1732620820406, 1732777843413, 1732967444202, 1732581880052, 1732537348266, 1732297907866, 1730431637877, 1732528987056, 1732064402453, 1732161463627, 1732161361105, 1734693865315, 1732539791357, 1732450576223, 1732776746773, 1733200030814, 1730310780824, 1732090400229, 1732502734078, 1732611297214, 1730388271971, 1732502847940, 1732861834984, 1732803304631, 1732802996367, 1730459957644, 1732525650061, 1732777679937, 1730474280214, 1732064922129, 1732502389110, 1732503264188, 
1732090656729, 1732626033232, 1732622050826, 1732068235850, 1732064149865, 1732381278416, 1732779677629 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Dgy1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Ww1f" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_TsTD" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Area_Chair_kEvt" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_uan5" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_TsTD" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Ww1f" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Ww1f" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_uan5" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_8unt" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Dgy1" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Ww1f" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ], [ "ICLR.cc/2025/Conference/Submission9160/Reviewer_Ww1f" ], [ "ICLR.cc/2025/Conference/Submission9160/Authors" ] ], "structured_content_str": [ "{\"comment\": \"The additional discussion of the posterior collapse is helpful. I still believe that its avoidance is the main driver of any improvement of the generative performance of the VAE. The fact that you achieve this by a judicious choice of the prior in hyperspherical coordinates and a square-root type annealing schedule during training remains interesting.\\n\\nHowever, no guarantees on the achieved performance improvement can be given at this stage and one wonders if simpler methods like reducing the variance of the prior (or a lower dimensional latent space in the first place) would not yield similar results. \\n\\nI believe the paper's strongest point is to highlight that some regularization constraints act as an external magnetic field (l. 301) and training needs to quickly reach a region in latent space consistent with these constraints followed by a relaxation (\\\"quenching and annealing\\\"). This seems like a more defensible claim (rather than \\\"sparsity implies poor generation\\\") given the material presented in the paper. As such it would already provide valuable insights into the training of VAEs with regularization constraints.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the prompt response! 
I hadn't noticed the Supplementary Material earlier.\\n\\nI'd like to thank the authors for their clarifications and the additional experimental results addressing my concerns. I recognize the effort and dedication put into the rebuttal. As a result, I am raising my score to a 6.\\n\\nThat said, I believe the presentation of the paper could still be improved, which is also an important aspect. Specifically, the figures should clearly indicate that the metric being shown is the self-FID. Additionally, the references should be corrected (e.g., \\\"J Tomczak. Priors (blogpost). URL https://jmtomczak.github.io/blog/7/7_priors.html,\\\" where the date of access is missing, or published conference articles cited as their ArXiv versions). Finally, the main text should point readers to the additional experimental results provided in the Appendix (e.g., CelebA results).\"}", "{\"comment\": \"Thank you for your sustained interest in the discussion and the paper. We appreciate that.\\n\\nSince we don't have too much time remaining, could the reviewer, if possible, pinpoint exactly which part of, for example, Section 4.5 of the paper (where we say what we mean by quality of the generation as measured by the FID and quality of the reconstruction measured by the MSE) could be made more clear? We simply state exactly what we mean there, and it is the same interpretation we have been giving here. We really do not see how it could be made any clearer. Furthermore, the examples we added for CelebA64 in A.14 illustrate pretty convincingly that our interpretations of those FID values are sound (we will add a reference to that appendix in Section 4.5). \\n\\nWe also changed the name to self-FID and the vertical axis of Fig.2 to \\\"generation closer to reconstruction\\\" (we will upload this new version of the pdf tomorrow, plus some other minor corrections). The use of \\\"generation\\\" in the title is correct because it does refer to generation in the absolute sense there (which is assessed by looking at *both* the MSE and the self-FID for each model). We will keep looking in the paper for any ambiguous use of the term \\\"generation\\\" and replace it with \\\"generation closer to reconstruction\\\" if it actually refers to improvement in the self-FID. But beyond that, we really cannot think of any other clarification regarding this particular issue.\\n\\nWe agree that there was some potential for confusion in the terminology, but we consider that there should not be any confusion now after the highlights we made in Section 4.5: it's as explicit as it could possibly be. We also explained our rationale for this choice of metrics to evaluate our proposal.\\n\\nFinally, a general detailed discussion about the FID as a metric for evaluating generation is obviously beyond the scope of this paper. At some point, an evaluation metric has to be chosen to do the evaluation. We chose a rather popular and widely used one. We provided several examples in different datasets so that the reader can get an intuition from the images and the corresponding self-FID values. The interpretation we make is sound. Pathological cases are avoided in our experiments. \\n\\nAs any proposed metric, FID certainly may have its drawbacks (as pointed out by one reviewer), but those don't seem to be having too much of a role in our results, and, in any case, will be generic to anyone using it, not just our paper. We will add a word of caution and cite the reference suggested by the other reviewer, but we don't see any relation between this and some supposed insurmountable clarity issues in our paper.\\n\\nSo, we ask, with all honesty, what can we do to clarify more? Can the reviewer please clarify?\\n\\nBest regards.\\n\\nThe authors.\"}", "{\"comment\": \"Hello.\\n\\nWe invite you to engage with our rebuttal and consider changing your score if your points are well addressed.\\n\\nThank you.\"}", "{\"title\": \"Sorry for the late answer\", \"comment\": \"Dear authors, sorry for my late answer, health issues prevented me from participating during the last few days.\\nI have carefully read the latest version and believe that my concerns about clarity are now addressed. I am thus happy to raise my score. I thank the authors for engaging in the discussion and their dedication. I believe this paper is very interesting and, as I said before, I really enjoyed the creative approach there, I am thus very happy to raise my score to accept.\\nBest wishes\"}", "{\"title\": \"The argument about mode collapse is not correct.\", \"comment\": \"We agree that \\\"**fully** collapsed models will generate random noise and ignore the latent\\\". But this is a completely useless model with a terrible maximal MSE.\\n\\nWe don't understand why this reviewer still brings up this point.\\n\\nNone of our models have full mode collapse. None of the standard VAEs are fully collapsed in our experiments, and we did an extra experiment with a VAE that has no mode collapse at all. In all cases, it shows that the compressed VAE provides a better alternative in the practical area of the MSE/FID plane. \\n\\nWe do not consider FID on its own, but only in the context of the reconstruction quality. \\n\\nThe edits suggested for the paper make sense and are easy to do. We will emphasize more that the FID in our paper is not the usual FID by calling it self-FID. \\n\\nWe don't understand what extra work the reviewer is referring to.\"}", "{\"comment\": \"I thank the authors for the clarification. This would be very useful to include in the paper for clarity.\\nHowever, my comment about good FID in collapsed models still stands. Indeed, as mentioned in my initial comment, fully collapsed models will generate random noise and ignore the latent. Sampling from $Z_\\\\epsilon$ will lead to the same result as the decoder will ignore the latents in both cases. Thus one will get a low FID but a very poor image quality. As mentioned by reviewer Ww1f, it has previously been observed that a low FID may not be representative of good image quality, and collapsed VAEs are an illustration of such a case.\\n\\nWhile I understand the authors' idea of considering that a good model is able to simultaneously reconstruct correctly images from the original dataset and also good quality images in inference mode (i.e., using $Z_\\\\epsilon$), my initial concern was that FID is referred to as representative of the generation quality in the paper, see Fig. 2 for example. Given what was discussed above and also with reviewer Ww1f, I find this misleading as collapsed models do not provide good generation quality. I thus think several updates of the paper would be beneficial. Specifically, I suggest that the authors:\\n1) Clearly define what the goal of FID is in their case \\n2) How they compute it\\n3) Change generation quality to a better defined term to avoid confusion.\\n\\nAt the moment I believe that the paper's idea is promising but the amount of work needed to make the argument clear is too important to accept it in its current state. I thus leave my score as it is.\"}", "{\"comment\": \"I thank the authors for the additional analysis they performed on CelebA and CIFAR as well as the provided clarifications.\", \"i_still_have_some_questions_and_comments\": [\"*Re FID and posterior collapse:* is FID a good metric for measuring generation quality? In the case of a fully collapsed VAE, the decoder will generate random noise ignoring the latent representations (which will always have a mean close to 0 and a variance close to 1) so one would expect a low FID. Despite this, the generation quality is poor (random noise) so I am not sure one could say that a low FID is always representative of a better generation (as indicated in Fig. 2 for example).\", \"*passive/collapsed variables are ignored by the decoder*: the authors mentioned in A.6 that \\\"they believe that [...] the decoder simply ignores the dimensions in question\\\". This has been shown by [1,2] which the authors may want to cite there.\", \"there are several typos in the new appendix, I encourage the authors to proofread them carefully during any subsequent revisions.\", \"all my comments from the mathematical soundness and minor comments section are yet to be addressed.\", \"if the authors do another revision, could they provide a diff file to highlight the new parts and facilitate further review?\", \"Overall, despite the updates provided by the authors, I am still not convinced that the paper quality is sufficient for acceptance in its current state. I will thus leave my score as it is for now.\", \"[1] Rolinek, M., Zietlow, D. and Martius, G. (2019). Variational Autoencoders Pursue PCA Directions (by Accident). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).\", \"[2] Dai, B., Wang, Y., Aston, J., Hua, G. and Wipf, D. (2018). Connections with Robust PCA and the Role of Emergent Sparsity in Variational Autoencoder Models. Journal of Machine Learning Research, 19(41), pp. 1–42\"]}", "{\"summary\": \"The paper proposes a new way to formulate the latents of VAE on hyperspherical coordinates. 
They use real MNIST data to show the improved generalization ability of VAE.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written.\", \"The relevant works are clearly listed.\", \"The idea of drawing insights from high dimensional physical systems is interesting.\"], \"weaknesses\": [\"The experiment results section is weak.\", \"The evaluation is only qualitative comparison. Is there any quantitative metric (e.g. classification/prediction error) that can be used to compare the proposed method with existing methods?\", \"Lack of comparison with other related hyperspherical VAE work listed in the Section 2 related works, e.g. 'Hyperspherical Variational Auto-Encoder', Davidson et al. (2018), Yang et al. (2023), Bonet et al. (2022), etc.\", \"The paper only shows results on one real dataset. How well does the proposed method generalize to more datasets?\"], \"questions\": \"See the weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We call $\\\\hat{X}$ the \\\"reconstruction\\\" of $X$. We measure how good this reconstruction is by the standard Mean Squared Error (MSE). This is correlated with the perceived level of \\\"blurriness\\\" of the reconstructed images w.r.t. the originals.\\n\\nWe now take an equal number of random samples, $Z_{\\\\epsilon}$, from the prior in latent space and decode them, $X_{\\\\epsilon}=D(Z_{\\\\epsilon})$. We compute the FID between $\\\\hat{X}$ and $X_{\\\\epsilon}$. This is in order to avoid the `blurriness', which is measured separately via the MSE, interfering here. We call this the \\\"generation\\\" measurement. \\n\\nAfter much consideration, we decided on this approach because of the following. 
A good VAE-based generative model should minimize both of these metrics *simultaneously*: that is, to be able to generate random samples which are *in-distribution* w.r.t. the reconstructed dataset (low FID), and such that the latter actually resembles the original dataset (low MSE).\", \"re_low_fid_in_collapsed_model\": \"first, none of the models are fully collapsed, they all still have some functional latent dimensions. This makes them similar to a fully non-collapsed model in much lower dimensions. Thus, one can have a low FID (in our sense) in those situations (we measured the FID in both the HD semi-collapsed, Fig.10, and low D non-collapsed, Fig.26), but the reconstruction suffers (high MSE). Note the similarity of the MSE in both of these examples (the FID is a bit higher in the HD case, but still the lowest for that dimension as we vary $\\\\beta$).\"}", "{\"comment\": \"We thank all the reviewers for their careful reading of the manuscript and raising of several interesting points, giving us the opportunity to expand and improve some aspects of our work and paper.\\n\\nWe will answer each reviewer one by one in the order in which their reviews appear here in the coming days. Since some of the reviewers raised similar issues, we may repeat a similar answer to each of them regarding those particular points. \\n\\nBest regards.\\n\\nThe authors.\", \"edit\": \"the Appendix is now in the Supplementary Material.\"}", "{\"comment\": \"(... continuation)\\n***\\nRe impact of different values of $a_{\\\\mu,k}$: Thank you, this is a very useful question for us to better explain the more subtle points and results of our method. In the new appendix A.8 we start with a moderate amount of compression by encouraging the mean of the (cosine) angles $\\\\overset{\\\\mu}{\\\\varphi_{k}}$ to be in the same direction as the vector whose Cartesian coordinates are $(1,\\\\dots,1)$, i.e., $a_{\\\\mu,k}=1/\\\\sqrt{k+1},\\\\,\\\\forall k$. 
Indeed, recall that the closer we get to the north pole, the lower the volume. Nevertheless, this moderate compression is not enough to significantly improve the generation. Thus, we go to full compression mode by setting $a_{\\\\mu,k}=1,\\\\,\\\\forall k$, which encourages all the points to converge and condense at the north pole. It's only in this regime of very high compression that we get a significant, appreciable improvement in the generation.\"}", "{\"comment\": \"Re annealing: We did conduct experiments along those lines, but we found little change by including the annealing in the standard $\\\\beta$-VAE (it simply moves to a different point in the standard VAE curves, dashed lines, in Fig.2 of the paper, but the overall shape of the curves doesn't change). This is expected if our hypothesis is correct, since the samples in that case are still in the equators.\\n***\", \"re_generalization_of_other_tasks\": \"The classification task was not really included to support our higher density claims, but to complement our interpretation of the lower entropy states. The aim was not to improve or compare classification metrics, and this task is only tangential to our main claims. This is why we stopped those comparisons after we considered that we made our point in that aspect. Our code always generates those types of figures at the end of the training for any of the datasets anyway, and the results are qualitatively similar to the ones presented in the paper for MNIST (although, with lower classification accuracy for all cases, standard or compression VAE, due to the increased complexity of those other datasets). 
The same with the projections on the $2-$sphere.\\n***\\nRe equations, typos, etc.: thank you for pointing out these mistakes, we will update with a corrected version of the paper in the coming days.\\n***\", \"re_regimes_of_the_vae_and_posterior_collapse\": \"We added an extensive discussion related to the different regimes of the VAE and posterior collapse in new appendices, A.6, A.7, A.8, and A.12. Our finding is that, in the standard VAE, a high $\\\\beta$ value produces considerable collapse. Nevertheless, in the high dimensional case, for each dimension, these collapsed models produce the best results (for the standard VAE) in terms of FID, i.e., the generative model is good in this sense. The price to pay for this is a low reconstruction accuracy, or `blurriness' in the decoded images. Thus, these collapsed models simply behave like non-collapsed ones in a much lower dimension for the latent. It's only in the lowest dimension that we considered ($50$), that the de-collapsing of the model improves the FID; nevertheless, in these cases, due to the low dimension, *both* the collapsed and de-collapsed models have very low reconstruction accuracy, so they fall very far away from the useful zone in the MSE-FID plane of our Fig.2 in the paper.\\n\\nIt's simply not possible to produce non-collapsed standard VAE trainings in high dimensions and with a high $\\\\beta$. This explains why there's a limit on how much it can go in the MSE-FID plane, as we show in Fig.2 of our paper for CIFAR10 (and now also for CelebA64 in new appendix A.10). \\n\\nOne could try to forcefully de-collapse the models, but this alone only helps in maintaining more stable reconstructions but at the price of making the FID worse. We experimentally prove that this has to be because of the sparsity gained by the use of all the available dimensions (which make the reconstruction more expressive), since the FID only improves when we start the volume compression with our method. 
We discuss this in detail in A.7 and A.8.\\n***\", \"re_extra_computation_time\": \"We measured an increase of 32 per cent in the training time required *per epoch* for training a CIFAR10 model with $n=200$ and mini batch size$=200$, in comparison to the same situation for the standard VAE. We will update the paper to provide the increase rate using big O notation in terms of the dimensions in the coming days, thank you for pointing this out.\\n***\", \"re_projection_of_active_variables\": \"We already mentioned the different modes of the VAE and how they reflect in the metrics we studied. Regarding the sphere projection, since we do this projection by first dividing the (Cartesian) coordinates of $\\\\mu$ into three groups (e.g., if $n=90$, from index $0$ to $29$ in the first group, $30$ to $59$ in the second, and $60$ to $89$ in the third) and then we average all coordinates within each group to obtain the corresponding $3-$D coordinates $x$, $y$, and $z$ for that $\\\\mu$, then this automatically only takes into account the active, non-collapsed coordinates (since a value of $\\\\mu_i=0$ for the same index $i$ for *all* $\\\\mu$ becomes irrelevant in the sum performed in the averaging, only the active variables contribute to the projection).\\n***\\n(continues in next comment...)\"}", "{\"metareview\": \"This paper proposes a novel approach to improve the image generation quality of Variational Autoencoders (VAEs) by applying a mechanism akin to the quenching process used to reduce the entropy of high-dimensional systems. The authors hypothesize that the poor image generation quality of VAEs at inference time is due to the sparsity of high-dimensional spaces and propose to apply a change of coordinates from Euclidean to hyperspherical during the KL divergence computation.\\n\\nThe reviewers generally found the paper to be interesting and creative, with a good potential for improving the generation quality of VAEs. 
However, they raised several concerns regarding the clarity of the paper, the soundness of the experiments, and the interpretation of the results.\\n\\nAfter a thorough discussion and revisions, the authors addressed many of the concerns, providing additional experimental results, clarifying the interpretation of the metrics, and improving the overall clarity of the paper. The reviewers appreciated the authors' dedication and engagement in the discussion. However, after discussion with the authors and among the reviewers, the paper is still very borderline, with three reviewers leaning towards acceptance and two towards rejection. We will therefore need to reject the paper in its current form. We would still like to encourage the authors to resubmit an improved version of the paper in the future.\", \"additional_comments_on_reviewer_discussion\": \"see above\"}", "{\"comment\": \"We added an edit to the previous comment in reference to collapsed models: Re low FID in collapsed model: first, none of the models in our experiments are fully collapsed, they all still have some functional latent dimensions. This makes them similar to a fully non-collapsed model in much lower dimensions. Thus, one can have a low FID (in our sense) in those situations (we measured the FID in both the HD semi-collapsed, Fig.10, and low D non-collapsed, Fig.26), but the reconstruction suffers (high MSE). Note the similarity of the MSE in both of these examples (the FID is a bit higher in the HD case, but still the lowest for that dimension as we vary $\\\\beta$). In Fig.2, these cases appear to the lower right: low FID, but high MSE. These models are therefore too blurry to be considered good. Thus, by keeping track of both FID and MSE, we can make a good assessment.\\n\\nAll of our experimental results follow qualitatively what one would expect by looking at the obtained FIDs-MSE pairs and the corresponding plotted images. See A.6, A.9, and the new A.14 (for the CelebA64 case). 
Thus, for the purposes of our experiments and claims, we have extensive evidence that the FID in our sense works perfectly well as a measure of in-distribution w.r.t. the reconstructed data. \n\n1-2) We described this in section 4.5 of the paper. We highlighted the relevant interpretations now.\n\n3) Yes, this is something we could consider (particularly the naming of the vertical axis in Fig.2)."}", "{\"comment\": \"I thank the authors for their feedback on the reviews.\nHowever, I am still not convinced about the benefit of sparsity and tight concentration in the latent space compared to other approaches like decreasing the latent dimension or manifold learning in the latent space to sample from the estimated posterior. I keep my grade."}", "{\"comment\": \"Hello.\n\nCould you please provide an explanation of why you still consider that the \\"*avoidance [of posterior collapse] is the main driver of any improvement of the generative performance of the VAE*\\" even after we conducted the experiment you suggested (*) in your initial review and then presented the results in A.7 that clearly contradict your initial intuition about this (there is no collapse, and yet the generation is bad)? We added several other appendices discussing collapse and relating the improvement in generation to the decrease of sparsity, always in non-collapsed models, too.", "as_for_the_lower_dimensional_model_possibility": "yes, that is an excellent point, and the main motivation that we had for doing the parameter sweep we present in Fig.2 of the paper. 
That figure shows that the overall improvement achieved by our compression models (in both reconstruction and in bringing generation closer to reconstruction) cannot be reached by a standard VAE working with a different choice of parameters (dimension of the latent and $\\beta$, and also collapsed or non-collapsed).\n\nFinally, if you consider the insights that we obtained regarding the training dynamics, etc., as valuable, then, also taking into account what we said in the paragraphs above, we invite you to reconsider the very negative score you gave initially and that is still your official score.\n\n(*) We are very confident in this because we had similar concerns when we started investigating these ideas and ran the exact experiment that the reviewer suggested some time ago, from which the results we added in A.7 come. We didn't include this in the paper initially because we thought it was too much of a side point. But we changed our mind after this review and included it; it is indeed useful."}", "{\"comment\": \"I thank the authors for their responses and additional experiments. They address my questions and I'm willing to increase my score to 6."}", "{\"summary\": \"This paper explores the poor image generation quality of VAEs at inference time when the latent representation is high dimensional. The authors hypothesise, based on spin glass theory from statistical physics, that the issue comes from the sparsity of high-dimensional spaces. Based on this, they propose to apply a mechanism akin to the quenching process used to reduce the entropy of such systems. Practically, this is done by implementing a change of coordinates from Euclidean to hyperspherical during the KL divergence computation and setting the priors of each dimension of these new coordinates such that the latent samples are pushed away from highly entropic regions. 
This results in an improved generation quality for MNIST while keeping latents that are interpretable enough to perform clustering.", "soundness": "2", "presentation": "3", "contribution": "3", "strengths": "Originality\n========\n- I like the creative approach of solving the issue of poor image generation quality by applying a solution to a similar problem from physics.\nThis is quite different from previous publications about hyperspherical VAEs, where the main motivation (as far as I know) was to provide a better prior than the standard multivariate Gaussian.\n- The proposed model also differs from others in the hyperspherical VAE literature. Usually, changing the prior and posterior distribution can be complex as the reparametrisation trick and the KL divergence need to be updated accordingly. Here, the proposed solution is an elegant change of coordinates done after the reparametrisation trick during the KL divergence computation, making it easy to implement and intuitive to understand.\n\nQuality/Clarity\n===========\n- The section on spin glasses is well popularised, intuitive, and reads very well for non-physicists.\n- The paper is generally well-written and easy to follow.\n\nSignificance\n=========\n- Given the simplicity of the implementation, practitioners looking for better image generation quality with VAEs could easily adopt the proposed model.\n- The explanation of why the generation is bad at inference time is interesting for the research community working on the learning dynamics of VAEs.", "weaknesses": "While I really like this paper's approach, I have some concerns about its empirical soundness and found several mistakes in the given equations (see major comments below). Clarifying some aspects of the paper would also strengthen it (see major and minor comments below). 
If these concerns were addressed, I would happily raise my score.\n\nMajor comments\n=============\n\nExperimental soundness\n---------------------------------\n- The experiment compares the results of $\\beta$-VAEs with change of coordinates + annealing with the results obtained with $\\beta$-VAEs without annealing. As a result, it is not possible to see if the improved generation comes from the annealing or the change of coordinates. Additional results with $\\beta$-VAEs using the same annealing schedule would be needed to ensure that the improvement is really due to the change of coordinates.\n- While the experiment is partially done on CIFAR 10, it is hard to assess how several results generalise. For example, a comparison of the classification accuracy, examples of images generated, and a comparison of the projection into spheres using both models would be nice to have on CIFAR 10 as well.\n\nMathematical soundness\n----------------------------------\nIn section 1.1, several equations were incorrect.\n- In Eq. (1) $KLD(q_{\\phi}(z) || p_{\\theta}(z))$ should be $KLD(q_{\\phi}(z|x) || p_{\\theta}(z))$\n- In Eq. (2) the given formula is not for $KLD(z,\\epsilon)$ but for $-KLD(z,\\epsilon)$. Furthermore, subscripts from the summations are missing. I would suggest either removing the second summation and using a matrix form like $KLD(z,\\epsilon) = \\frac{1}{2} \\sum^{N_b} \\bigl(Tr(\\sigma) + || \\mu||^2_2 - \\log det(\\sigma) - n\\bigr)$, or rewriting it with subscripts as $KLD(z,\\epsilon) = \\frac{1}{2} \\sum^{N_b} \\sum_{i=1}^n \\bigl(\\sigma_i^2 + \\mu_i^2 - \\log(\\sigma_i^2) - 1 \\bigr)$.\n- In Eq. (3), this is not the ELBO $\\mathcal{L}$ from Eq. (1) but its negative approximation $- \\tilde{\\mathcal{L}}$ which is minimised by the VAE. 
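As a quick sanity check of the subscripted closed form above, here is a small standalone Python sketch (ours, purely illustrative, not code from the paper) computing the per-sample KL divergence between a diagonal Gaussian posterior and the standard normal prior:

```python
import math

def gaussian_kld(mu, sigma):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) for one sample:
    # 1/2 * sum_i ( sigma_i^2 + mu_i^2 - log(sigma_i^2) - 1 )
    return 0.5 * sum(s * s + m * m - math.log(s * s) - 1.0
                     for m, s in zip(mu, sigma))

# A posterior identical to the prior has zero divergence:
print(gaussian_kld([0.0, 0.0], [1.0, 1.0]))  # 0.0
# Shifting one mean coordinate by 1 contributes exactly 1/2:
print(gaussian_kld([1.0, 0.0], [1.0, 1.0]))  # 0.5
```

Summing this quantity over the $N_b$ samples of a mini-batch gives the batch-level term discussed here.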
While the original formulation of VAEs by [1] does not contain a $\\beta$ term and is equivalent to setting $\\beta=1$, it would be interesting to briefly discuss what the impact of $\\beta$ is when $\\beta > 1$ and when $\\beta < 1$. Indeed, the motivation for both settings is very different: the first is to provide \"disentangled representations\" [2] and to force the VAE to learn in a polarised regime (a.k.a. selective posterior collapse), which is akin to a PCA-like behaviour [3,4,5], while the second aims at mitigating posterior collapse and is often used together with annealing [6]. Thus, the choice of $\\beta$ has a practical impact on the proposed experiment (see further discussion in the questions part below).\n\nClarity\n---------\n- To facilitate the understanding of the paper, it would be great to have the derivations from Eq. (4) to Eq. (5) and from Eq. (8) to Eq. (9) in the appendix.\n\n\nMinor comments\n=============\n\nMathematical notation\n------------------------------\nThe current notation is sometimes confusing. For example, l. 77 $\\mathcal{N}(z; \\mu, \\sigma)$ reads as \"the univariate Gaussian with mean $\\mu$ and standard deviation $\\sigma$\" while it is in fact a multivariate Gaussian. A suggestion to improve this is to use a different notation for numbers, vectors, and matrices, following, for example, the notation suggested in the math_commands.tex file of the ICLR template.\n\nClarity\n---------\n- l. 478, I found the sentence \"the quality of AE and VAE [...]\" confusing as AEs are not discussed anywhere else in the paper and are not used in the experiment. I would suggest removing the part about AE to make the argument clearer.\n- I struggle to see what 32% of computation time of the KLD represents in terms of additional training time. It would be easier to see with an average run time with and without change of coordinates over n seeds and k epochs. 
Furthermore, if this increases with the number of dimensions, an estimate of the increase rate using big O notation would be very useful for practitioners to assess whether this implementation is suitable to their needs.\n- The seminal papers on VAEs are [1,7], which are different from the ones referenced. I would suggest updating this.\n\nTypos\n--------\n- l. 91 weighs -> weights\n- l. 100 teh -> \"that the\" ?\n- l. 133 there's -> there is\n- l. 133 have -> has\n- l. 157 it's -> it is\n- l. 422 gven -> given\n- KL divergence is inconsistently referred to as \"KL divergence\" and \"KLD\" in the paper.\n- l.591-592, the title of Higgins et al. is capitalised while others are not.", "questions": ["As mentioned in the weaknesses section, a VAE usually learns in a polarised regime when $\beta$ is sufficiently high. In this setting, the latent representations contain two kinds of variables: active and passive. The passive variables are kept as close to the prior as possible and used to lower the KL divergence, while the active variables contain the information needed for reconstruction (or further use in downstream tasks). Active variables typically do not follow the prior as the KL is kept low by the passive variables. Instead, they have a very low $\\sigma$ such that during the reparametrisation trick, $z \approx \\mu$. One would usually remove passive variables for downstream tasks, only keeping the small subset of variables containing some information [8]. I wonder if keeping only active variables would change the sphere projection to something more akin to what is obtained with hyperspherical coordinates?", "What is the impact of using different values of $a_{\mu, k}$ (keeping $a_{\mu, k} \neq 0$ of course)? Was there a specific reason to choose $a_{\mu, k}=1$?", "References", "=========", "[1] Kingma, D. P. and Welling, M. (2014). Auto-Encoding Variational Bayes. 
In International Conference on Learning Representations, vol. 2.", "[2] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Shakir, M. and Lerchner, A. (2017). $\\beta$-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In International Conference on Learning Representations, vol. 5.", "[3] Rolinek, M., Zietlow, D. and Martius, G. (2019). Variational Autoencoders Pursue PCA Directions (by Accident). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).", "[4] Dai, B., Wang, Y., Aston, J., Hua, G. and Wipf, D. (2018). Connections with Robust PCA and the Role of Emergent Sparsity in Variational Autoencoder Models. Journal of Machine Learning Research, 19(41), pp. 1\u201342.", "[5] Lucas, J., Tucker, G., Grosse, R. B. and Norouzi, M. (2019a). Don\u2019t Blame the ELBO! A Linear VAE Perspective on Posterior Collapse. In Advances in Neural Information Processing Systems, vol. 32.", "[6] Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A., Jozefowicz, R. and Bengio, S. (2016). Generating Sentences from a Continuous Space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning.", "[7] Rezende, D. J., Mohamed, S. and Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 32.", "[8] Bonheme, Lisa, and Marek Grzes. \"Be more active! 
Understanding the differences between mean and sampled representations of variational autoencoders.\" The Journal of Machine Learning Research 24.1 (2023): 15423-15452."], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "4", "code_of_conduct": "Yes"}
This can cause problems for the generation but is not related to our hypothesis: even with a perfect match, the sparsity problem is present. \n***", "re_conclusions_section": ["This is a fair point as we ran out of space. We will expand our conclusions section in the paper, adding the following remarks.", "the improvement in generation was only evaluated for the purposes of hypothesis testing, and not as absolute performance.", "we did not evaluate the method for high resolution and larger datasets such as ImageNet.", "the extra computing time is about 32\\% more per epoch for 200 latent dimensions. In higher dimensions, the added computation increases and might become prohibitive.", "future research can focus on optimizing this method (or another method that takes into account the hypothesis about sparsity) for obtaining state-of-the-art results in generation and other tasks, in VAEs and other models.", "The use of latent representations in hyperspherical coordinates can also be further explored in several other applications (perhaps unrelated to compression and generation), using the provided conversion script and inspired by its proof of concept of practical feasibility in the present paper.", "***"], "re_singularities_of_the_hyperspherical_coordinates": "This is a very interesting point and where one can appreciate the very peculiar nature of HD spaces. In low dimensions (e.g., the $2-$sphere), if one were compressing all the samples towards the north pole, the result would indeed be an island-like condensation there, centered at the pole, with many of the samples in the inner part of the island falling into the singularities of the hyperspherical coordinates. In HD spaces, however, the situation is very different: rather than a full island, one usually gets a ring-shaped distribution, with no samples at the center. 
This can be seen in all our histograms for angular variables when we display the results of the compression mode in several of the appendices of the paper. It is a generic effect, sometimes called the `ring or band effect' (see [1]). Thus, the north pole singularities will always be avoided in HD, even if one is deliberately trying to reach them!\n\n[1] Eliran Subag. The geometry of the Gibbs measure of pure spherical spin glasses. Inventiones Mathematicae, 210(1):135\u2013209, 10 2017. ISSN 00209910. doi: 10.1007/s00222-017-0726-4.\n***\n(continues in next comment...)"}", "{\"comment\": \"*Regarding FID*\nI understand that the authors don't have any collapsed models and that this case is not relevant to their analysis. My point about collapsed models was to illustrate why using FID as a \"good generation quality metric\" was misleading in some cases. Another example was also discussed by reviewer Ww1f.\nOverall, my main point about this and some of my previous points is that the paper lacks clarity in some aspects.\n\n*Regarding additional work*\nI have carefully reviewed the latest update and I acknowledge the amount of updates made by the authors and I thank them for their dedication as I believe this represents many hours of work. 
This has done a lot to improve the original version of the paper and I do not mean that the paper needs more work regarding experiments. \\nHowever, I believe that the paper still needs some work to improve its overall clarity.\\nI am sorry I cannot raise my score now, as I enjoyed the paper and the creative idea the authors had. I believe this paper has a nice potential but still lacks the clarity needed to convincingly make its point.\"}", "{\"summary\": \"The authors propose expressing the latent variable of a standard VAE in hyperspherical coordinates, reformulating the KL term of the ELBO loss accordingly, to reduce the sparsity in high-dimensional latent spaces and improve the generative performance of the model. Their approach draws on a parallel between high-dimensional spaces in statistical physics, specifically spin glasses, and the training dynamics of a VAE.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"To the best of my knowledge, this work introduces a novel approach to improve generative performance in VAEs by constraining latent representations, exploiting the hyperspherical coordinates formulation to reduce sparsity in the high-dimensional latent space.\", \"I find the connection between VAE training and physical systems, such as spin glasses, particularly valuable, as it provides a novel perspective for understanding the model's training dynamics.\"], \"weaknesses\": [\"The authors should revise the references (e.g., replace arXiv preprints with published versions where available, add access dates for blog references, and consider replacing Wikipedia links with more reliable sources). Additionally, some typos were noted in the main text, and Figures 4 and 5 appear in low resolution, making the text difficult to read.\", \"I think the experimental section could be strengthened. 
Although the primary aim of this work is to improve the model's generative performance, the authors present generated samples only on MNIST, which is a simple dataset. Also, they do not compare the generative performance of the method with any other baseline (in the introduction they mention methods that improve generative performance by using more flexible priors but do not show if the results achieved are comparable).\", \"The paper lacks a concluding discussion and does not explore potential limitations of the method or directions for future research, which I believe would add significant value.\"], \"questions\": [\"What are the main limitations of this method?\", \"Could the authors provide a comparison with additional baselines and training times? It would be valuable to compare the generative performance and training times of the proposed method with other state-of-the-art approaches to evaluate whether the additional computations required for using hyperspherical coordinates are justified by the improvements in generation performance.\", \"The authors state that \\u2018the random samples of an independent multivariate Gaussian distribution fall in the equator of a hypersphere, and thus none of them is near the singularities of the hyperspherical coordinates\\u2019. However, once the latent samples are forced away from the equator, could it be possible to fall near the singularities of the hyperspherical coordinates?\", \"The authors have presented generated images only on MNIST, a relatively simple dataset that does not require a high-dimensional latent space to capture its features. As a result, introducing additional constraints in the latent space does not appear to limit its capacity to represent information. 
However, with more complex datasets, how can these constraints affect the expressivity of the model given that the representations tend to overlap (as stated in Figure 1)?\", \"Comparing the results in Figure 2 is challenging because the configurations (dimensionality of the latent space) are positioned at different points (not aligned). Consequently, it is difficult to determine in which configurations or regimes the VAE with hyperspherical coordinates surpasses the vanilla VAE and vice versa.\", \"In section 4.4, the authors generate new data sampling from a von Mises\\u2013Fisher distribution with the same mean and covariance as the ones empirically calculated from the latent embedding of the full test dataset. What is the motivation to use the empirical statistics of the test set instead of the training set for generation?\", \"Could this method be extended/generalized to other distributions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We agree re the conclusion, we are working on an update. We will save room by moving the VAE description in appendix (fixing the equations issue). EDIT: all of this is now fixed in the new uploaded version.\\n\\n**Please note that the FID does not compare generation with original data (as usually done). In our paper FID compares the reconstruction and the generation**. So generated images could be blurrier but still have lower FID if they are closer to the reconstruction images. This is a very important point that some reviewers seem to have overlooked (Cf. 
our answer to reviewer TsTD about the performance metrics used in the paper: \\\"*FID, to measure the quality of the generation; this metric was calculated between the reconstructed, i.e., decoded, original data and the decoded random samples, in order to avoid the `blurriness', which was measured separately via the MSE, to interfere here*\\\").\\n\\nWe are not proposing a better generator, we are arguing that sparsity is an issue. In the standard VAE, there is a trade off between reconstruction and generation. When the latent is compressed, this trade off is much reduced in the case of MNIST (good reconstruction and good generation). It is still present in more complex images (CIFAR10, CelebA) but always with better performance for both reconstruction and generation in the compressed version.\\n\\nNote that the blurry generated images in Fig.10 (third row in the images panel) look more like the reconstructed data (second row of that panel), than the analogue situation in Fig.12. This is correlated with the FID values displayed there. The greater details in Fig.12 come from a lower MSE; nevertheless, those details are out-of-distribution w.r.t. the reconstructed dataset, they don't correspond to details that could be present in an image in the original dataset, it's just noise fabricated by the decoder, hence the greater FID. We have CelebA results in which, for the case of latent dimension $n=1000$ and $\\\\beta=0.09$, the trend just described for Fig.12 is even more clear (the noise in each pixel is very sharp, almost no face can be seen, but the reconstruction is very good). We will try to include this in that appendix in the limited time remaining. 
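To make the metric concrete: the FID discussed here is a Fréchet distance between the statistics of the *reconstructed* set and the *generated* set, not between either set and the original data. A minimal illustrative sketch in one feature dimension, assuming Gaussian summaries (this is our simplification for exposition; the real FID uses multivariate Inception features and full covariances):

```python
import statistics

def frechet_1d(xs, ys):
    # Frechet distance between the 1-D Gaussian fits of two sample sets:
    # (mu_x - mu_y)^2 + (sigma_x - sigma_y)^2
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return (mx - my) ** 2 + (sx - sy) ** 2

reconstructions = [0.1, 0.2, 0.3, 0.4]
generations = [0.1, 0.2, 0.3, 0.4]
# Identical statistics -> zero distance, regardless of how blurry both
# sets are relative to the original data (blurriness is tracked by MSE):
print(frechet_1d(reconstructions, generations))  # 0.0
```

This is why a low FID in this sense must always be read together with the MSE, as the comment above stresses.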
EDIT: We included these results in the new appendix A.14 in the new uploaded version."}", "{\"comment\": \"Hello.\n\nThank you for your response and raising the overall score from the initial 3 to 6.", "regarding_the_other_concerns": "yes, unfortunately we didn't have enough time to address all of those since we tried to focus on the more contentious issues, which required most of our attention in the edits.\n\nBest regards."}", "{\"comment\": \"Hi.\n\nYes, we moved the appendices to the Supplementary Material because there's a limit of 50 MB for the main text. The Supplementary Material download icon should be below the abstract and visible for the reviewers, as far as I know."}", "{\"comment\": \"Thank you for your response. However, the current version of the manuscript does not include the appendices. I would appreciate it if you could upload the complete version."}", "{\"summary\": \"This paper presents an enhancement to Variational Autoencoders (VAEs) by introducing a hyperspherical latent space with a novel loss function. The method aims to improve generative quality by concentrating embeddings in denser latent regions, moving them away from the equatorial band often associated with sparsity. 
The approach offers compatibility with existing VAE structures, requiring minimal adaptation to the standard VAE framework.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": "\u2022 Innovation and Relevance: The use of hyperspherical coordinates in latent space with principles drawn from statistical physics is a unique and potentially impactful innovation.\n\u2022 Clear Problem Statement: The paper presents the problem of sparse latent spaces in VAEs clearly, providing a well-motivated solution.\n\u2022 Practical Usability: The proposed method integrates easily with current VAE models and introduces a manageable computational overhead.\n\u2022 Clarity of Presentation: The visual support, especially in Figure 1b, effectively illustrates the approach, aiding in understanding the latent space adjustments.", "weaknesses": "\u2022 Limited Experiments: The method is only evaluated on two datasets (MNIST and another dataset). More examples of generated samples and interpolations would better support the claim of improved density in latent space.\n\u2022 No Comparison with Relevant Competitors: The paper lacks comparative experiments with established hyperspherical VAEs, such as Davidson et al. (2018), or with Riemannian approaches like \u201cA geometric perspective on VAEs\u201d by Chadebec and Allassonniere.\n\u2022 Inconclusive Results: The results in downstream tasks like classification are mixed, and the paper\u2019s claims of a denser latent space are not consistently reflected in the performance metrics.\n\nWhile the paper offers valuable theoretical insights, additional empirical support is needed to substantiate the generative benefits relative to state-of-the-art sampling methods.", "questions": "1. Could you provide comparisons with hyperspherical VAEs or other structured latent space models?\n2. Have you explored interpolation in the latent space to support your density claims?\n3. 
Why were only two datasets used? Including more diverse datasets could better validate your method.\n4. Why would a latent space as tightly concentrated as shown in Figure 1 be desirable? This remains unclear.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}", "{\"title\": \"clarification on FID\", \"comment\": \"I am not sure I understand what the authors mean regarding FID here?\n\nGiven some dataset $X$, an encoder $E$, a decoder $D$ and the output reconstructed/generated by the decoder $D(E(X)) = \\hat{X}$, I assumed that the authors compared $\\hat{X}$ with $D(E(\\hat{X}))$, hence the low FID of collapsed models.\n\nWas this assumption incorrect? If so, could the authors define what they mean by reconstruction and generation in that context? Both terms are often used interchangeably in the literature."}", "{\"comment\": \"Hello.\n\nDecreasing the dimension can indeed make the generation look closer to the reconstructed initial data, but this comes at the price of very blurry reconstructions (because of the lack of expressive power in the latent space due to the low dimensionality). A good generative model should produce a generation that looks closer to the reconstructed initial data, but also such that the latter (the reconstructed initial data) is closer to the initial data (i.e., the reconstructions are not blurry). 
We showed in the parameter sweep of Fig.2 that the models that achieve this are always our compression ones rather than the standard VAE (low dimensional ones included), so we think the reviewer is clearly mistaken regarding this point.\n\nAbout manifold learning, the problem is that this will place the samples inside the manifold (that's good, since being outside will generate bad images), but if the manifold is sparse (which will happen for any standard choice in HD), then we would be in the same situation as the standard VAE. Only by adding a compression method to the manifold could one compare that approach with ours. We didn't encounter such an approach in the literature.\n\nWe invite you to respond to these points and to change your score if these are addressed by us (we think they are; if not, please offer the corresponding rebuttal to our answers)."}", "{\"summary\": \"The authors propose to convert the latent variables of a VAE to hyperspherical coordinates. This allows them to constrain samples from the prior distribution to a small region in latent space, especially if the latent dimension is large. They provide experimental evidence that this improves the performance of the VAE when generating new data. 
They also provide some theoretical justification arguing that the sparsity of the latent space impairs the smoothness of the latent manifold.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": "1.\tThe introduction of hyperspherical coordinates to better capture samples obtained from high dimensional Gaussians is useful, especially in the context of constraining the latent variables.\n2.\tThe paper attempts to establish an ambitious connection between replica symmetry breaking in spin glasses and their proposed modifications of the loss function that avoids \u2018high sparsity\u2019 equatorial regions of the hypersphere.", "weaknesses": "1.\tThe connection to spin glasses is made via a formal similarity of the energy function of a spin glass and their regularization term (equation 8). This connection appears weak as it does not explain: a) how the regularization helps escaping local minima that correspond to low-quality outputs of the VAE, b) the effect on Parisi\u2019s order parameter which seems at the heart of the spin glass theory of neural networks, and c) the role of temperature, i.e., the learning rate, in the proposed scheme by which desired low-entropic states are reached.\n2.\tI am not sure that sparsity of the latent representation is the root cause of poor generative performance in VAEs. \u201cPosterior collapse\u201d seems a more likely explanation (also of the empirically observed improvements in section 4) as the proposed constraints not only compress the volume but also simply decrease the variance of the prior distribution. \n3.\tAs a minor point, in line 189-190 the authors see sparsity as an impediment to learning a data representation as a smooth manifold. I am not sure I agree unless the objective would be to explicitly construct the manifold (e.g., via simplicial complexes). 
But the manifold hypothesis (like the spin glass model) is only a conceptual aid (of how to think about VAE representations of data), not a fully developed theory from which algorithms and their convergence properties can be derived.\", \"questions\": \"Have you tried running experiments without angular constraints but only radius constraints (to test against the lower variance of the prior effect)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We have now added results for one extra dataset (CelebA64) in a new appendix A.10, as well as more figures with decoded images for all the considered datasets. We also included interpolations in a new appendix A.11 that further support our claims.\\n***\\nDespite the fact that some of the models in the mentioned references also work on the hypersphere in latent space, the similarity ends there. We do not really consider them competitors because both the goals and methods of these works are very different from ours. In particular, those approaches are not concerned with the sparsity issue of HD spaces, nor with volume compression as a possible solution to it. \\n\\nDavidson et al. (2018), in particular, build a VAE with a KLD term between a uniform distribution on the hypersphere and a von Mises-Fisher approximate posterior. Thus, by construction, there cannot be any compression of the type we discuss in our work; it's still highly similar to the standard VAE in the sense that the posterior distribution ends up approaching a uniform distribution on the hypersphere, where, we claim, the sparsity issue arises. \\n\\nRegarding \\u201cA geometric perspective on VAEs\\u201d by Chadebec and Allassonniere, our model is only Riemannian in the sense that the hypersphere is a basic example of a Riemannian manifold. 
The only key insight from differential geometry that we really use is the fact that a smooth manifold can be covered with different types of coordinate systems. This latter fact, which is the central aspect of our method, is not exploited in the mentioned reference, where everything seems to be done in terms of Cartesian coordinates. Furthermore, it also doesn't seem concerned with our main hypothesis about sparsity in high dimensional spaces.\\n***\\nThe classification task was not included to support our density claims, but to complement our interpretation of the lower entropy states. The aim here was not to improve classification metrics, and this task is only tangential to our main claims. In contrast, the performance metrics in which we were interested are the mean square error (MSE, to measure the quality of the reconstructions) and the Frechet Inception Distance (FID, to measure the quality of the generation). These metrics were systematically evaluated in a full parameter sweep in CIFAR10 (and now also CelebA64), where our method shows a consistent improvement in line with our claims (Fig.2 of the paper).\\n***\\nIt's not our purpose in this work to beat state-of-the-art sampling methods. Instead, our goal was to try to *understand* why VAEs seem to fail at the generative task in high dimensions. After hypothesizing that the exponentially diverging volume/high entropy, localized mainly in the equators and introducing too much sparsity, could be the reason (or one of the reasons), we devised a method (volume compression via hyperspherical coordinates) that could attack this directly and explicitly in order to see if we could get any improvement. If the answer is positive, then we could conclude that our hypothesis is likely correct. \\n\\nAnd this is indeed what our experiments show. The method and the improvement in the metrics are mere tools or means for a goal, which was to falsify our main hypothesis, stated in the title of the paper. 
Our idea is to highlight this point and its connection with statistical physics, so that the broader research community could benefit from this knowledge, not just those concerned with VAEs or state-of-the-art sampling methods there. Our points are mainly about high dimensional spaces and how they are becoming ubiquitous in modern machine learning, so we should investigate more about their nature and effects. We have observed considerable interest in this point within the machine learning community.\\n\\nAs a secondary point, we highlight the use and potential of hyperspherical coordinate representations themselves for manipulating data in latent spaces, for which we provide a simple, vectorized, and relatively cheap (in computing terms, for moderately high dimensions, generally lower than $1000$) script for the conversion; the compression method and its use in this paper serve as one possible application, and by no means do we think it exhausts the full potential. As with the sparsity issue, it's about ideas and offering proof of concept for these ideas.\"}", "{\"comment\": \"The point of the paper is to improve generation while keeping a high number of latent dimensions. Regarding the avoidance of collapse, please check appendices A.7 and A.8, where we explicitly show, based on the suggested experiment, how it's not enough by itself to improve the generation even in MNIST. Then, in A.8, we show how it starts to improve only after the compression.\\n\\nWe demonstrated that compression allows improving reconstruction and generation, unlike reduction of dimension as suggested (which affects the reconstruction very negatively, i.e., very blurry images). \\n\\nWe do agree that the insight about training dynamics is very interesting and worth publishing!\"}", "{\"comment\": \"Please note that the FID does not compare generation with original data (as usually done). In our paper FID compares the reconstruction and the generation. 
So generated images could be blurrier but still have lower FID if they are closer to the reconstruction images (i.e., the FID in our paper is relative to the reconstruction, not to the original data).\\n\\nThis is a very important point that some of the reviewers seem to have overlooked. \\n\\nThank you for the additional references, we will update the paper. \\n\\nWe wanted to provide the additional materials ASAP for this review discussion. We will update the manuscript to fix the errors/typos that you pointed out, thank you. We will save room by moving the VAE description to the appendix (fixing the equations issue). EDIT: many of these issues are now fixed in the new uploaded version. We are still working on fixing/adding the references and typos.\"}", "{\"comment\": \"(... continuation)\\n***\", \"re_figure_2\": \"This figure contains our central experimental results, so we have now included more details. Results are plotted as they are, we don't understand how they can be aligned. We will include the extra explanation below in appendix A.5.\\n\\nOne could start first by looking at Fig.7 in appendix A.5, to get familiar with the basic trends (we offer in that figure an explicit break-up of the results in terms of both latent dimension and KLD gain $\\\\beta$). In the standard VAE, for a fixed $\\\\beta$, as we increase the latent dimension, the FID increases (worse generation), but the MSE decreases (sharper, less blurry images); for a fixed latent dimension, as we increase $\\\\beta$, the FID decreases (better generation), but the MSE increases (less sharp, more blurry images). \\n\\nIn the compressed version, we know, from the MNIST example in Fig.1, that it can improve the generation in HD cases where the standard VAE produces meaningless images for decoded random samples. But this comes at the price of losing some reconstruction quality. 
On the other hand, we also know that, in high dimensions and with a high $\\\\beta$, the standard VAE is prone to collapse and then gives better generation but with worse reconstruction quality (like a non-collapsed model in lower dimensions). \\n\\nIt is important to plot all of these different regimes in a single MSE-FID plane, to be able to compare the collapsed HD, high $\\\\beta$ standard VAE with non-collapsed results in lower dimensions. It is also interesting to compare all of these cases with the compressed version: there are regions in the MSE-FID plane that are only accessible via the compression VAE.\", \"this_is_what_we_find\": \"if we select an acceptable value for the MSE (e.g., $7.5$; higher values are considerably more blurry), the lowest FID value that any standard VAE model achieves is around $39$ (for any latent dimension and any $\\\\beta$, and any mode, collapsed or not). For the same MSE value, the compressed versions can achieve lower FIDs, of around $28$.\\n\\nThus, the compressed model is able to retain a given `good' reconstruction in HD while keeping a low FID, something not possible with a standard VAE. \\n***\", \"re_empirical_statistics\": \"We tried both and there was only a marginal difference, not relevant enough. We preferred the test version since it takes less time (fewer samples) and was easier and more natural to implement in the code (since this calculation is done during testing).\\n***\", \"re_other_distributions\": \"Interesting question. The method is by construction adapted to distributions on the hypersphere, and the KLD expression is only valid for a Gaussian prior. A different prior would probably require a sampling-based divergence method (e.g., sliced Wasserstein), which we want to investigate in future work.\"}", "{\"comment\": \"I would like to thank the authors for their response and acknowledge the effort they have put into improving the manuscript during the rebuttal phase. As a result, I am slightly increasing my score. 
My comment on the FID metric was meant to point out that most of the discussion about improved generation performance in the paper focused on FID scores. Since this metric has known limitations, I believe the discussion should not rely **entirely** on it, especially given that the main claim of the paper focuses on generation performance in VAEs. Adding and discussing examples, such as the CelebA case, provides helpful context and makes the results easier to interpret. However, as Reviewer 8unt says, the text should be clearer regarding the metric employed and what the authors mean when they say they improve generation.\\n\\nAdditionally, I believe it would be helpful to include samples from both the prior and the aggregated posterior in a visualization similar to Figure 1 (or any other visualization the authors deem appropriate). This would show how well they align and further illustrate that the issue is not related to the alignment of the posterior and prior, but rather to sparsity. In the end, introducing flexible priors that better match the aggregated posterior has been shown to improve generation performance in the literature, so I think a discussion in this regard could be helpful. Please let me know if you have already included a discussion on this topic and I may have missed or overlooked it\", \"minor_comment\": \"I still have trouble reading the text in Figures 4 and 5 given the low resolution, and authors should revise the references and links included in the paper (as I mentioned in my first review).\"}", "{\"comment\": \"We invite the reviewer to read the interchange with reviewer 8unt below, for a follow-up on these same raised issues. 
Thank you.\"}", "{\"comment\": \"The evaluation is only qualitative in the case of MNIST, since it's easy to appreciate the differences there.\\n\\nThe performance metrics in which we were interested are the mean square error (MSE, to measure the quality of the reconstructions) and the Frechet Inception Distance (FID, to measure the quality of the generation; this metric was calculated between the *reconstructed*, i.e., decoded, original data and the decoded random samples, in order to avoid the `blurriness', which was measured separately via the MSE, to interfere here). These metrics were systematically evaluated in a full parameter sweep in CIFAR10 (and now also CelebA64), where our method shows a consistent improvement in line with our claims (Fig.2 of the paper).\\n\\nIt's beyond the purposes and scope of the current paper to evaluate the discussed methods on further downstream tasks, like classification, and optimization of metrics there for state-of-the-art results. Instead, our goal was to try to *understand* why VAEs seem to fail at the generative task in high dimensions. In particular, it was falsifying our main hypothesis, stated in the title of the paper. Our idea is to highlight this point and its connection with statistical physics, so that the broader research community could be benefited from this knowledge. Our points are mainly about high dimensional spaces and how they are becoming ubiquitous in modern machine learning, so that we should investigate more about their nature and effects. We have observed considerable interest on this point in the machine learning community.\\n***\\nDespite the fact that some of the models in the mentioned references also work on the hypersphere in latent space, the similitude ends there. We do not really consider them suitable for comparison because both the goals and methods of these works are very different from ours. 
In particular, those approaches are not concerned with the sparsity issue of HD spaces, nor with volume compression as a possible solution to it. \\n\\nDavidson et al. (2018), for example (and related elaborations), build a VAE with a KLD term between a uniform distribution on the hypersphere and a von Mises-Fisher approximate posterior. Thus, by construction, there cannot be any compression of the type we discuss in our work; it's still highly similar to the standard VAE in the sense that the posterior distribution ends up approaching a uniform distribution on the hypersphere, where, we claim, the sparsity issue arises. \\n***\\nWe have now added results for one extra dataset (CelebA64) in a new appendix A.10 that further support our claims.\"}", "{\"comment\": \"In the main text we chose to emphasize the regularization term analogy because this aspect was the most clearly noticeable or immediate, and directly related to our proposal of priors in hyperspherical coordinates. Regarding the other specific points, the analogy goes as follows:\\n\\nb) while we did not mention it explicitly in the main text for lack of space, we do make extensive use of the relevant order parameter here (given by the law of the inner product between replicas) in our experiments. We use it to check if there is an overall replica symmetry breaking in our latent distribution. This parameter is explicitly plotted in all our reported figures in the appendices as a dashed red line in the same part of the figure where we plot the histograms for the norms of $\\\\mu$ and $z$. \\n\\na) and c) The gain $\\\\beta$ here has the role of the inverse temperature, $\\\\beta=1/T$. \\n\\nIn spin glasses and complex systems, the energy function has exponentially many local minima in the equatorial region of the hypersphere. To overcome them, a very strong signal or bias towards the desired region is necessary at the beginning, together with a rapid cooling or quenching. 
\\n\\nThus, our initial high $\\\\beta$ (i.e., very low temperature $T$) setting, in the presence of the high intensity hyperspherical external magnetic fields acting as a bias in directions away from the equator, should make the gradient descent dynamics quickly tend towards a low temperature distribution with replica symmetry breaking. Indeed, this is what we observed in our experiments, since we check for the replica angle, as mentioned before. \\n\\nThis initial strong compression helps escape those undesirable equatorial minima. Nevertheless, the obtained state shows too much overlapping between samples, so we then perform the annealing (i.e., lower the $\\\\beta$, or increase the temperature $T$, and also lower the intensity of the magnetic fields) in order to allow the system to relax the strong order introduced by the initial bias and, in this way, transition to a replica symmetry breaking state with a bigger angle between replicas. This decreases the MSE and makes the decoded images sharper, at the cost of some generation quality. \\n\\nThis is fully consistent with the spin glass analogy in a quenched and then annealed system, where the glass, always in the replica symmetry breaking phase, jumps from one so-called `pure state' to a different pure state, i.e., goes back up a bit in the ultrametricity tree/hierarchy of the replica angle values. But the system has escaped the zone with exponentially many local minima in the equator. More details and figures are provided in a newly added appendix A.9.\\n***\\nWe added an extensive discussion related to posterior collapse in new appendices, A.6, A.7, A.8, and A.12. Our finding is that, in the standard VAE, a high $\\\\beta$ value produces considerable collapse. Nevertheless, in the high dimensional case, for each dimension, these collapsed models produce the best results (for the standard VAE) in terms of FID, i.e., the generative model is good in this sense. 
The price to pay for this is a low reconstruction accuracy, or `blurriness' in the decoded images. Thus, these collapsed models simply behave like non-collapsed ones in a much lower dimension for the latent. It's only in the lowest dimension that we considered ($50$) that the de-collapsing of the model improves the FID; nevertheless, in these cases, due to the low dimension, both the collapsed and de-collapsed models have very low reconstruction accuracy, so they fall very far away from the useful zone in the MSE-FID plane of our Fig.2 in the paper.\\n\\nIt's simply not possible to produce non-collapsed standard VAE trainings in high dimensions and with a high $\\\\beta$. This explains why there's a limit on how far it can go in the MSE-FID plane, as we show in Fig.2 of our paper for CIFAR10 (and now also for CelebA64 in new appendix A.10). \\n\\nOne could try to forcefully de-collapse the models, but this alone only helps in maintaining more stable reconstructions, at the price of making the FID worse. We experimentally prove that this has to be because of the sparsity gained by the use of all the available dimensions (which makes the reconstruction more expressive), since the FID only improves when we start the volume compression with our method. We discuss this in detail in A.7 and A.8.\\n***\", \"re_question\": \"Yes, we tried this indeed. We have now updated the paper with explicit examples that make the case for our point, from this perspective too, about the sparsity. In appendix A.7, we show an example in which posterior collapse is avoided (in both the Cartesian and hyperspherical coordinates representations), while the distribution of $\\\\mu$ is still similar to a uniform distribution on the hypersphere, like in the standard VAE. Nevertheless, this is not enough to guarantee good generation, even with a high $\\\\sigma$ as in the mentioned example.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response! 
I appreciate the additional experimental results and your clarifications. However, I still have some concerns:\\n\\n- While I acknowledge your response regarding the conclusions, I have noticed that there is no dedicated conclusions section in the revised paper. I think it is important that the authors include this. As the paper has already reached the page limit, it would be helpful to see how they plan to restructure the main text.\\n\\n- As Reviewer 8unt pointed out, the equations, typos, and references have not yet been corrected in the revised version. I would also recommend highlighting these corrections in a different color to make the revision process easier.\\n\\n- Regarding the CIFAR10 generation examples, I share Reviewer 8unt\\u2019s concerns that FID alone may not be the best way of evaluating generation performance. In Figure 10, we observe lower FID scores but blurry generated images, while in Figure 12, higher FID scores correspond to more visually appealing images with greater detail. A similar discrepancy can be seen between Figures 16 and 17. This aligns with findings in prior work suggesting that FID does not always correlate with human evaluations, potentially due to its reliance on the pre-trained Inception-V3 model [1]. Since the goal of the paper is to investigate why VAEs perform poorly as generators in high-dimensional regimes, it would strengthen the claims if the Authors provided a more comprehensive evaluation of generation performance.\\n\\nIf these concerns are adequately addressed, I would be happy to reconsider and raise my rating. For now, however, I am maintaining my current score.\\n\\n**References**\\n\\n[1] Stein, George, et al. 
\\\"Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"comment\": \"Hello.\\n\\nThank you for your engagement and willingness to raise the score.\\n\\nRegarding the FID, your initial concern with this metric is based on what we consider a clear misunderstanding and reading of the presented figures. We provided an explicit rebuttal to your comment in an edit, so we will repeat it in case it was missed: Note that the blurry generated images in Fig.10 (third row in the images panel) look more like the reconstructed data (second row of that panel), than the analogue situation in Fig.12. This is correlated with the FID values displayed there. The greater details in Fig.12 come from a lower MSE; nevertheless, *those details are out-of-distribution w.r.t. the reconstructed dataset*, *they clearly don't correspond to details that could be present in any image in the original dataset, it's just noise fabricated by the decoder, hence the greater FID*. We have CelebA results in which, for the case of latent dimension $n=1000$ and $\\\\beta=0.09$, the trend just described for Fig.12 is even more clear (the noise in each pixel is very sharp, almost no face can be seen, but the reconstruction is very good). We included these results in new appendix A.14 in the new uploaded version.\\n\\nThus, whatever issue that the FID may have, is clearly **not** present in our experiments in the range and scope in which we are operating. In all the explicit examples, the values follow the qualitative feel of the images. Then, the choice of FID is sound, good, and enough for our purposes. We consider that any further discussion of this point and unwillingness to raise the score should clearly include an explicit rebuttal to the details in the answer of the previous paragraph, which are the actual points you initialy raised. 
Furthermore, we already highlighted very explicitly the meaning and way in which we assess the generation quality in Section 4.5, as we already mentioned to reviewer 8unt.\\n\\nWe included a new figure, Fig.32 in A.15, where we show decoded samples from the approximate aggregate posterior in a standard VAE. There is little to no improvement in the generation w.r.t. the standard VAE. Flexible priors may indeed improve the generation, but for different reasons (reduction of the mismatch). We have provided extensive evidence by now that the main issue in HD is the sparsity; mismatches between prior and posterior are only an additional issue on top of that. Even more: in Fig.14 of A.8, we take a moderately compressed model (by our method) and show the 3D-embedding of the approximate aggregate posterior, which is a von Mises-Fisher-like distribution, and it shows no holes or cracks (since our method, by compressing things, makes the clusters more packed and closer to each other). If the philosophy of the mismatch between prior and posterior as root cause of bad generation were true, then sampling from such an approximate aggregate posterior von Mises-Fisher-like distribution should produce better generation. But we do exactly that in that figure and we don't see any improvement. It's only when we do a full compression mode that we get the improvement in generation. Thus, it was the sparsity. See also new appendix A.15 for even more elaboration on these points.\\n\\nThank you.\"}" ] }
4xBew7kuYB
Studying the Effects of Training Data on Small Language Models
[ "Ivan Lee", "Taylor Berg-Kirkpatrick" ]
Prior work has found that training very small language models (SLMs) on synthetic children's stories allows them to generate coherent text, comparable to much larger models. These stories are claimed to encompass the vocabulary and factual knowledge base of a 3-4-year-old child, capturing the ``essence of natural language.'' Because of these claims, it is tempting to attribute the findings to the high readability (i.e., simple language) of children's stories, drawing a parallel to how children learn language. Is the human concept of readability relevant in the context of language model training, or are these findings better explained by other properties of the data? In this study, we investigate this by first validating several automatic readability measures. We then create synthetic corpora with varying levels of readability and assess the coherence of text generated by SLMs trained on these corpora. We find that training on high readability text is not a prerequisite for coherent SLMs. Specifically, SLMs trained on data with substantially more complex language also exhibit the same abilities as those trained on simple language. Moreover, training on simple language does not lead to the earlier development of coherence during training.
[ "small language models", "pretraining" ]
Reject
https://openreview.net/pdf?id=4xBew7kuYB
https://openreview.net/forum?id=4xBew7kuYB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zpePxIUe3l", "zWmigiuT1v", "zQH1etUeyD", "yCZ296fjMM", "wegto1B04D", "uUKVP0dteB", "tKXTiOSV4J", "s6r6G3Avdd", "rAWtRxojox", "qPVCjHvCAO", "gfBHCLHRXG", "dtQZW0dBV6", "O5kstK4pWR", "O4uvKtFNcr", "N7K0eWXypz", "Mf1RZsBnr5", "M7T3sOlhOZ", "KsppJHaRXm", "KfAw5pRS8R", "J9Oo79Nt4M", "FuGbUJAlGs", "FcZRNzDBZS", "Bbw4pXSBHN", "Aa72eC1QzG", "ACiy0VdfmH", "8bqkolMLfT", "5hoEFntiF3", "38S8mcjbUc", "1HhuCTsGP7" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730549679331, 1732312158745, 1737524250770, 1732671791974, 1733035272057, 1732906424084, 1732586467118, 1731741187929, 1732495802939, 1733183266535, 1732477608485, 1732312515202, 1732945066382, 1732312079649, 1732495753281, 1730695752914, 1732312630556, 1734700566766, 1732495775356, 1730697517694, 1732312118915, 1732312578419, 1732495688375, 1733122852918, 1733145643405, 1732312413967, 1732618682040, 1733102679970, 1733122864460 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_WT8i" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_r6Wo" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_h919" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_r6Wo" ], [ "ICLR.cc/2025/Conference/Submission13303/Area_Chair_TS51" ], [ 
"ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Area_Chair_TS51" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_h919" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Area_Chair_TS51" ], [ "ICLR.cc/2025/Conference/Submission13303/Area_Chair_TS51" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_wvMq" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Area_Chair_TS51" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_r6Wo" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_WT8i" ], [ "ICLR.cc/2025/Conference/Submission13303/Reviewer_wvMq" ], [ "ICLR.cc/2025/Conference/Submission13303/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper discusses the effects on training small language models by changing some features related to the concept of readability of one particular dataset. The experiments show no effects.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a new dataset with some added features in relation to a previous dataset (TinyStories).\", \"weaknesses\": \"The scientific contribution of this paper is limited, as it tackles a very narrow (and somewhat artificial) research question, and the experiments show no discernible effects whatsoever. 
The research question is somewhat artificial in the sense that the concept of readability (in humans) concerns the cognitive load of *interpreting* a text, which is not the same thing as *learning* a statistical language model from a text. In particular since readability is usually defined in terms of features related to frequency and length of individual tokens, but the paper does not discuss the influence of tokenization on the learning abilities of language models. It is therefore not at all clear (to me) why the concept of readability would have anything at all to do with how well a statistical language model performs. The experiments included in the paper confirms that it does not. The paper also contains an experiment that shows that the concept of readability and the concept of text quality (as interpreted in terms of perplexity and coherence) are unrelated, which is exactly what you would expect given the definition of these concepts. As such, it is difficult to see what novel knowledge this paper contributes with.\", \"questions\": \"When measuring quality, you use a set of open models. Why not simply use a state of the art model such as GPT-4o or Claude3.5 instead?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"References\", \"comment\": \"#### References\\n[1] \\nEldan, R., & Li, Y. TinyStories: How Small Can Language Models Be and Still Speak Coherent English? arXiv preprint arXiv:2305.07759 (2023). https://arxiv.org/abs/2305.07759\\n\\n[2] Placani, A. Anthropomorphism in AI: hype and fallacy. AI Ethics 4, 691\\u2013698 (2024). https://doi.org/10.1007/s43681-024-00419-4\\n\\n[3] Shanahan, M. Talking About Large Language Models. arXiv preprint arXiv:2212.03551 (2023). https://arxiv.org/abs/2212.03551\\n\\n[4] \\nDeshpande, A., Rajpurohit, T., Narasimhan, K., & Kalyan, A. Anthropomorphization of AI: Opportunities and Risks. 
arXiv preprint arXiv:2305.14784 (2023). https://arxiv.org/abs/2305.14784\\n\\n[5] Abercrombie, G., Cercas Curry, A., Dinkar, T., Rieser, V., & Talat, Z. Mirages. On Anthropomorphism in Dialogue Systems. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 4776\\u20134790 (2023). Association for Computational Linguistics, Singapore. https://doi.org/10.18653/v1/2023.emnlp-main.290\\n\\n\\n[6] Cohn, M., Pushkarna, M., Olanubi, F., Mengesha, Z., Moran, J., Padgett, D., & Heldreth, C. Believing Anthropomorphism: Examining the Role of Anthropomorphic Cues on User Trust in Large Language Models. Late Breaking Work (2024).\\n\\n[7] Power, A., Burda, Y., Edwards, H., Babuschkin, I., & Misra, V. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets. arXiv preprint arXiv:2201.02177 (2022). https://arxiv.org/abs/2201.02177\\n\\n[8] https://www.quantamagazine.org/tiny-language-models-thrive-with-gpt-4-as-a-teacher-20231005/\\n\\n[9] Warstadt, A., Mueller, A., Choshen, L., Wilcox, E., Zhuang, C., Ciro, J., Mosquera, R., Paranjabe, B., Williams, A., Linzen, T., & Cotterell, R. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning. Association for Computational Linguistics, Singapore (2023). https://aclanthology.org/2023.conll-babylm.pdf\\n\\n[10] https://www.reddit.com/r/MachineLearning/comments/13j0spj/r_tiny_language_models_below_10m_parameters_or/\\n\\n[11] Haga, A., Sugawara, S., Fukatsu, A., Oba, M., Ouchi, H., Watanabe, T., & Oseki, Y. Modeling Overregularization in Children with Small Language Models. Findings of the Association for Computational Linguistics: ACL 2024, 14532\\u201314550 (2024). https://doi.org/10.18653/v1/2024.findings-acl.865\\n\\n[12] Bunzeck, B., & Zarrie\\u00df, S. GPT-wee: How Small Can a Small Language Model Really Get? Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, 35\\u201346 (2023). 
https://doi.org/10.18653/v1/2023.conll-babylm.2.\\n\\n[13] Edman, L., & Bylinina, L. Too Much Information: Keeping Training Simple for BabyLMs. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, 89\\u201397 (2023). https://doi.org/10.18653/v1/2023.conll-babylm.8.\\n\\n[14] Yam, H. M., & Paek, N. J. What Should Baby Models Read? Exploring Sample-Efficient Data Composition on Model Performance. arXiv preprint arXiv:2411.06672 (2024). https://arxiv.org/abs/2411.06672.\\n\\n[15] Steven Y. Feng, Noah D. Goodman, and Michael C. Frank. Is child-directed speech effective\\ntraining data for language models? EMNLP 2024. https://arxiv.org/abs/2408.03617\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for providing further explanations and editions. Based on the paper and the comment made during the rebuttal, I can see that the result matters but is not significant. The authors presented n-gram diversity correlating with learnability to support their hypothesis, but they did not explain the underlying reasons for their observations. The results show that complex language data neither helps nor harms the coherence of SLMs. From the appendix, the model output is coherent with complex and simple language. The discussion of how much the new proposed research direction might be helpful to future works in this area is unclear. While this paper identifies interesting problems, the paper\\u2019s contribution to the broader community is limited. I decided to keep my original score.\"}", "{\"comment\": \"> The authors presented n-gram diversity correlating with learnability to support their hypothesis, but they did not explain the underlying reasons for their observations.\\n\\nIn our general response, we proposed an explanation for the underlying reasons behind our observations. 
Specifically, we hypothesized that the primary difference between the datasets is their diversity, as measured by unique $n$-gram count. FineWeb (along with Dolma and SlimPajama), which is derived from web crawl data, encompasses a much broader range of language than our other datasets, which were generated by a LM using template-based prompts with minor variations. We then presented correlative evidence supporting this hypothesis.\\n\\nIf by \\\"did not explain the underlying reasons,\\\" the reviewer means that we did not establish causality, then we contend that providing correlative evidence is reasonable, given that the primary contribution of our paper is a careful and controlled study of the effects of training data readability, which was not performed by prior work.\\n\\nIn particular, [1] examined a single dataset characterized by child-directed language. This has led some researchers and the public to mistakenly attribute the findings of [1] to this type of language, thereby promoting the anthropomorphization of LMs and a misunderstanding of how LMs learn. To clarify this misconception, we analyzed five synthetic datasets and three web crawl-based datasets, each with varying readability levels, and demonstrated that child-directed language was not necessary for achieving coherent SLMs.\\n\\n[1] also claims that TinyStories consists solely of child-directed language, but this claim was never validated. We address this issue in Section 3 by conducting a comprehensive study of various readability measures. Additionally, we introduced a readability measurement method that strongly correlates with human experts' assessments.\\n\\nGiven the influence of [1] (as discussed in the general response), we hope the reviewer sees the importance of addressing the gaps in prior work and correcting the misunderstandings that have resulted from it.\\n\\n> The results show that complex language data neither helps nor harms the coherence of SLMs. 
From the appendix, the model output is coherent with complex and simple language.\\n\\nThis evidence helps clarify that child-directed language is not responsible for the results in [1]. If the reviewer is interested in the differences between simple and complex language in affecting coherence, Figure 3 may be relevant. Surprisingly, we found that coherence is achieved _earlier_ in training with complex language compared to simple language.\\n\\n> The discussion of how much the new proposed research direction might be helpful to future works in this area is unclear. While this paper identifies interesting problems, the paper\\u2019s contribution to the broader community is limited.\\n\\nOur goal is to correct a misunderstanding within the research community and the public. We believe this will benefit future work, as research based on incorrect assumptions from misinterpreted studies can lead to inaccurate conclusions, further propagating misinformation.\"}", "{\"comment\": \"Thank you for your response. Could you elaborate on why the contribution is not considered substantial enough? Without this information, it is challenging for us to address your concerns.\"}", "{\"comment\": \"I raised my score because the authors addressed my concerns and also presented new results following suggestions from other reviewers, which I believe make the paper stronger.\\n\\nYes, the paper is narrowly scoped, reassessing the interpretation of a single paper. 
However, I agree that the TinyStories paper *has* been widely misinterpreted, and the misinterpretation *is* harmful, leading to misunderstandings of how LMs learn, misleading verbiage for the general public, and potentially more human-inspired model development that is \"on the wrong track.\"\n\nIf the paper is not accepted at ICLR, I encourage the authors to try an *ACL venue, where I believe the audience would be more receptive.\"}", "{\"summary\": \"This paper investigates the generation abilities of SLMs as a function of the readability of the training data.\n\nThey include a set of definitions of readability and generate two synthetic datasets (LlamaTales-Jr and LlamaTales-GRE) with the same data generation process, where LlamaTales-Jr is more readable. Based on the experiments, they found that the generation abilities of SLMs have no relationship with the readability of the training data: simple language training would not accelerate coherence development.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper's question of investigating the assumption is meaningful.\n\nThis paper has a detailed definition of quality, evaluation criteria, and the experiment process. In the Appendix, the authors give a couple of examples of data generation. Experiments are conducted and hypotheses are evaluated.\", \"weaknesses\": \"* **Lack of Novelty and Insights**:\nThe main contribution of this paper is the empirical experiment investigating the assumption. However, the authors did not explain the reasons behind the findings from their experiment and did not explore questions such as why the results exist and how such findings matter.
While the paper's finding is definitely meaningful to the machine learning and natural language processing community, the scope of this paper is lacking, and more insight is needed to explain why the results turn out this way.\n\n* **Lack of clarity**:\nWhile the author has detailed the experiment, the main paper is not well organized. Following the flow of the paper, figures/tables/sections are not handled consistently and are not placed close to the relevant context, which disrupts the reading flow. Tables 5-16 (sampled prompts) are not referenced in the main discussion. Figure 16 is not referenced in the main discussion, and it is not clear why this figure is included.\n\n* **Experiment details**:\nThe prompt used for LLM for evaluation and data generation is consistent but I am concerned if such a prompt would lead to bias. There\u2019s no discussion of other prompts being used in the experiment and no comparison of the result with other prompting methods.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"[Reminder] Response to Authors\", \"comment\": \"Dear Reviewer,\n\nAs the rebuttal period is drawing to a close, I would appreciate your response to the authors' rebuttal at your earliest convenience.\n\nBest Regards,\n\nArea Chair\"}", "{\"comment\": \"With tonight being the final opportunity for reviewers to ask questions, we wanted to check if there are any additional clarifications we can provide. If you feel that we have satisfactorily addressed your concerns, we hope you might consider updating our score.
Thank you again for your time and assistance in improving this paper.\"}", "{\"title\": \"Summary of new experiments, figures, and tables\", \"comment\": [\"We have included a summary of our new experiments, along with the corresponding tables and figures, to give reviewers a clear overview of our updates.\", \"1. **Five new datasets** (3 synthetic: Llamatales-{History, Sports, News} and 2 human-authored: Dolma, SlimPajama)\", \"Goal: Demonstrate more correlative evidence for why $n$-gram diversity, rather than child-directed language, may better explain our findings.\", \"Table A1: Coherence and readability of training splits (measured with LLM-as-a-Judge)\", \"Takeaway: Similar to LlamaTales-GRE, our new synthetic datasets score high on coherence and low on readability (relative to TinyStories and Llamatales-Jr).\", \"Figure A9: Coherence of SLMs trained on new datasets versus public models.\", \"Takeaway: Consistent with our original findings, SLMs trained on our new synthetic datasets are competitive with much larger models. This is not true for SLMs trained on our new human-authored datasets.\", \"Figures A26-28: Histograms of $n$-gram diversity (unique $n$-gram counts for various choices of $n$) across our datasets.\", \"Takeaway: Easy-to-learn data (TinyStories, LlamaTales) exhibit far less $n$-gram diversity than hard-to-learn data (FineWeb, SlimPajama, Dolma).\", \"Figure A29: Unique 3-grams (training data) versus _learnability_ (coherence of text generated by a 33M parameter SLM divided by the coherence of the training data used for that SLM).\", \"Takeaway: Low $n$-gram diversity strongly correlates with high learnability.\", \"Figures A23-25: Prompts used to generate our synthetic data.\", \"Table A21: Random examples from our new datasets.\", \"Table A22: Random generations by SLMs trained on our new synthetic data.\", \"2. 
**Two new prompt variants for LLM-as-a-Judge**\", \"Goal: Address reviewer r6Wo's concern that\", \"> The prompt used for LLM for evaluation and data generation is consistent but I am concerned if such a prompt would lead to bias. There\\u2019s no discussion of other prompts being used in the experiment and no comparison of the result with other prompting methods.\", \"Figures A12-15: Two new prompt variants: (a) instructing the LLM to generate an analysis _before_ assigning a score, and (b) providing the LLM with positive/negative examples in its instructions. We apply both variants to measure coherence and readability.\", \"Figures A35-36: Correlation matrices of the prompt variants.\", \"Takeaway: When measuring coherence, we observed no differences among the variants; all were strongly correlated with our model ranking (Section 4.2). When measuring readability, all variants correlated strongly with human experts, but generating an analysis before assigning a score was the weakest of the three.\", \"3. **Five new metrics:** fluency, clarity, consistency, grammar, and creativity\", \"Goal: address reviewer wvMq's concerns:\", \"> The ignorance of other dimensions of quality (for example, as authors also mentioned, clarity and fluency) makes any statements about \\\"generation abilities of SLMs\\\" an overclaim.\", \"> The quality measurement doesn't use any metrics from the original TinyStories paper: grammar, creativity, consistency with the beginning of the story.\", \"Figures A4-A8: Results of evaluating SLMs (trained from scratch) and public LMs with the new metrics.\", \"Takeaway: All new metrics (except for creativity) strongly correlate with our original metric: coherence. 
This supports our original finding that SLMs trained on complex language (instead of the simple language of TinyStories and LlamaTales-Jr) can still be competitive with much larger LMs.\", \"Creativity did not correlate with any of our quality measures and often ranked generations by toy models such as GPT2-small and Pythia-70M as being highly creative, suggesting this is not a good measure of text quality (Figure A8).\", \"Figures A16-20: Prompts for our new metrics.\"]}", "{\"comment\": \"Thank you for the constructive feedback. We address your three points below.\\n\\n1. In our revised draft, we repeated our experiments with alternative measures of quality (clarity and fluency) and observed that the results are consistent with our original findings: SLMs are competitive with much larger LMs when trained on either the simple language of TinyStories/Llamatales-Jr or the complex language of LlamaTales-GRE.\\n\\n2. We also repeated our experiments for the metrics used in [1] (grammar, consistency, and creativity) and again observed that, with the exception of creativity, the results are consistent with our original findings. We found that creativity was the only metric uncorrelated with all other quality measures, including our non-LLM measure (model ranking), suggesting it might not be a good measure of quality.\\n\\nExperimental results for points 1 and 2 are shown in Figures A4-8, and the prompts are shown in Figures A16-20.\\n\\n3. Our revisions demonstrates a correlative (though not necessarily causal) relationship between $n$-gram diversity (unique $n$-gram counts) and dataset learnability. For a more thorough discussion, please refer to \\\"What explains our results?\\\" in the general response.\"}", "{\"comment\": \"Thank you again for your constructive feedback. As the discussion period draws to a close, we would like to ensure that we have adequately addressed your concerns. We are happy to provide further clarification on any part of our response. 
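As an illustrative aside on the unique $n$-gram counts referenced above (Figures A26-A29): a minimal sketch of the computation, assuming naive whitespace tokenization and a hypothetical `unique_ngrams` helper (not the authors' actual implementation), could look like this:

```python
def unique_ngrams(texts, n):
    """Count distinct word n-grams across a corpus (naive whitespace tokenization)."""
    seen = set()
    for text in texts:
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            seen.add(tuple(tokens[i:i + n]))
    return len(seen)

# Templated sentences reuse most of their n-grams...
templated = ["the cat sat on the mat", "the dog sat on the mat"]
# ...while equally long but unrelated sentences share none.
diverse = ["quarterly earnings beat analyst forecasts today",
           "volcanic ash grounded all European flights"]
low, high = unique_ngrams(templated, 3), unique_ngrams(diverse, 3)
```

The intuition is that a templated corpus repeats the same word sequences, so its unique $n$-gram count grows slowly relative to an equally sized diverse corpus, which is why low counts can serve as a proxy for low diversity.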
A summary of our changes is available at the end of our general response.\"}", "{\"title\": \"General response\", \"comment\": \"# General Response\\n\\nWe thank the reviewers for their time and constructive feedback. We have identified two core questions raised throughout the reviews and addressed them below. Additionally, specific clarifications for each reviewer are provided in separate responses. Note that all references in the individual responses can be found at the end of this general response.\\n\\n## Why do our results matter?\\n1. __Anthropomorphizing LMs can be problematic.__\\n Given that LLMs can reduce tasks that were formerly thought to require human intelligence to next-token prediction and improve their performance simply by being instructed to \\\"think step by step,\\\" it is highly tempting to anthropomorphize these models.\\n Consequently, the research community has acknowledged the potential harms of anthropomorphizing LMs, and examining these risks is an active area of research [2, 3, 4, 5, 6].\\n For example, [2] argues that anthropomorphism tends to exaggerate and misrepresent LLM capabilities by attributing human-like attributes onto systems that do not possess them.\\n Moreover, via the same mechanism, anthropomorphism distorts judgments of responsibility and trust in LLMs.\\n\\n2. __Drawing parallels between human cognitive development and LM training is an example of anthropomorphization.__\\n While much of the conversation centers around interactions with _trained_ LMs, anthropomorphization also manifests itself in the way we think and talk about LM training.\\n Indeed, terms like \\\"grokking\\\" [7] and even the commonly used \\\"learning\\\" can evoke a sense of human-like understanding and cognitive processes.\\n While we see no issue with drawing inspiration from cognitive development, discussions suggestive of a deeper connection between the human learning process and stochastic gradient descent should be approached with care. 
\n\n3. __The results of [1] are both influential and can be easily misinterpreted as evidence that human cognitive development and LM training are more related than originally thought.__\n At the time of writing, Semantic Scholar reports 175 citations of [1], and TinyStories is the 22nd most liked dataset on Hugging Face. Their findings were also widely circulated on social media platforms [10] and were featured by Quanta Magazine [8].\n Because of [1]'s emphasis on using only words suitable for 3-4-year-old children, it is tempting to attribute their findings to simple, child-directed language rather than to the low number of statistical patterns that comes with synthesizing a dataset from a template-based prompt with minor variations among the instances of the prompt.\n This temptation is further fueled by the community's interest in developmentally plausible pre-training [9], as one might reasonably, though mistakenly, interpret the findings of [1] as supporting evidence for this research area.\n Our goal is to correct this misinterpretation.\n\n4. __Citations of [1] suggest that their results are being misinterpreted or present the findings in ways that emphasize the importance of child-directed language.__\n For example, [11] write \"Recent studies have shown the benefits of mimicking human language acquisition.
For instance, using child-oriented vocabulary and/or child-directed speech (CDS) as learning data improves learning efficiency.\\\"\\n Many more papers emphasize the simple, child-directed language of TinyStories when describing the dataset [12, 13, 14, 15], further amplifying the temptation to anthropomorphize LM training.\"}", "{\"title\": \"[Reminder] Response to Authors\", \"comment\": \"Dear Reviewer,\\n\\nAs the rebuttal period is drawing to a close, I would appreciate your response to the authors' rebuttal at your earliest convenience.\\n\\nBest Regards,\\n\\nArea Chair\"}", "{\"summary\": \"This paper investigates the question: are small LMs capable of learning TinyStories because it is *readable* (i.e., simple vocabulary, concepts, and grammatical structures) or some other feature, notably the dataset's lack of diversity (templated sentences with uniform structure)? The authors of TinyStories, and subsequent citations, only consider the former interpretation, but there is no evidence to eliminate the latter.\\n\\nThis paper carefully investigates this question by generating two datasets with the same synthetic data generation process, differing only in the vocabulary and the intended audience that the model is asked to use & consider. They call these two datasets $\\\\\\\\texttt{LlamaTales-Jr}$ and $\\\\\\\\texttt{LlamaTales-GRE}$. The two datasets are equally coherent, but $\\\\\\\\texttt{LlamaTales-Jr}$ is much more readable. They find that small LMs are *equally* capable of learning both $\\\\\\\\texttt{LlamaTales-Jr}$ and $\\\\\\\\texttt{LlamaTales-GRE}$, showing that *readability* does not necessarily explain small LMs' ability to learn TinyStories. Instead, they hypothesize it is the lack of diversity in the data.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is cleanly scoped and clearly written.\", \"It corrects a widespread misinterpretation of a result in the NLP literature. 
This result has been used to motivate LM development inspired by human language learning.\"], \"weaknesses\": [\"The scope of the paper is relatively narrow. While it shows that the community has widely misinterpreted the results of a particular paper, it's not clear how much it matters. Moreover, I believe the main surprising finding of $\\\\\\\\texttt{TinyStories}$ still stands, which is that SLMs are capable of learning the language of 3-4 year olds (regardless of why).\", \"I believe the overall paper can use some reorganization.\", \"I find it odd that \\u00a73 and \\u00a74 (which are all about measuring the readability and quality of the existing dataset, $\\\\\\\\texttt{TinyStories}$) are ordered before \\u00a75 (about constructing the datasets used in this paper). Wouldn't it make more sense to first describe the data creation methodology, *then* validate that they have the expected readability and quality? Right not, we don't get to the meat of the paper until halfway through page 7.\", \"The connection between figures and claims in the running text of the paper is all over the place. For instance, most of the main claims in \\u00a73 and \\u00a74 are supported by figures in the Appendix.\", \"The presentation of tables and figures can be more readable.\", \"Figures 2, 3, 6 are hard to interpret due to lack of textual explanation, and I think there must be a better way to present the results. My understanding is that in Figure 2, I should see that in (b), the *green* dots (SLMs trained on $\\\\\\\\texttt{LlamaTales-Jr}$) are approximately as high as the best gray dots (LLMs), and in (c), the *blue* dots (SLMs trained on $\\\\\\\\texttt{LlamaTales-GRE}$) are ALSO approximately as high as the best gray dots (LLMs). Wouldn't it be better for these to be on the same axes, so the reader can compare directly whether LlamaTales-GRE is as learnable as LlamaTales-Jr? 
Subplots (a) for $\\texttt{TinyStories}$ and (d) for $\\texttt{FineWeb}$ should be in the Appendix, since they aren't used to support the main claims. I'm not sure what Figure 3 is doing in the main paper, since it's not discussed in the running text.\", \"Table 1 contains results for many metrics which are not discussed in the running text of the main paper. To prevent reader confusion, I recommend moving the results for these metrics to the Appendix, where the metrics are described. The different metrics also don't seem to tell a different story.\", \"I recommend a table with examples from $\\texttt{LlamaTales-Jr}$ and $\\texttt{LlamaTales-GRE}$.\"], \"questions\": \"I would love to hear the authors' response to my interpretation of the tables / figures, in case there is any misunderstanding.\n\nI am open to raising my score if there is a strong argument for why correcting this misunderstanding is important for the community, as it is my main concern about the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the feedback. We believe there has been some misunderstanding in how our paper is being interpreted.
Moreover, we think our positions are more aligned than they may initially appear.\\n\\n>The research question is somewhat artificial in the sense that the concept of readability (in humans) concerns the cognitive load of interpreting a text, which is not the same thing as learning a statistical language model from a text.\\n\\nThis argument is actually one of the main motivations for our paper.\\nThere is an assumption being made about the positive influence of simple, child-directed language (which we measure with readability) on LM training and we are correcting it.\\nWhile it might appear obvious that readability should not influence LM training, developmentally plausible pre-training is an active area of research [9].\\nMoreover, the existence of LMs that can reduce tasks that were formerly thought to require human intelligence to next-token prediction and improve their performance simply by being instructed to \\\"think step by step,\\\" makes it highly tempting to anthropomorphize these models.\\nThe findings of [1] can be easily misinterpretted in ways that encourage the anthropomorphization of LMs.\\nOur objective is to prevent such misunderstandings.\\nA more thorough discussion of why our findings matter can be found in our general response.\\n\\n> In particular since readability is usually defined in terms of features related to frequency and length of individual tokens, but the paper does not discuss the influence of tokenization on the learning abilities of language models.\\n\\nWhile we agree that it could be interesting to analyze the influence of tokenization on LM training, we have already dedicated Section 3 to establishing a measure of readability that strongly correlates with human experts (LLM-as-a-Judge). 
Recall that we also found that this measure correlates better with human experts than classic readability formulas, which are based on word (token) length and sentence length (and some measures like Dale-Chall and Spache implicitly consider word frequency). Therefore, we did not further analyze tokenization given the objectives of our paper.\\n\\n> The paper also contains an experiment that shows that the concept of readability and the concept of text quality (as interpreted in terms of perplexity and coherence) are unrelated, which is exactly what you would expect given the definition of these concepts.\\n\\nThe experiment aimed to validate our measures of readability and text quality (PPL and coherence).\\nA key requirement for these measures is that they remain uncorrelated (this is by definition, as you noted).\\nHowever, since we instruct an LM to generate these scores, we cannot simply assume the outputs will exhibit these properties.\\nThis is why we devoted significant portions of our paper (Sections 3 and 4) to validating these measures before conducting our main experiments.\\n\\n> When measuring quality, you use a set of open models. Why not simply use a state of the art model such as GPT-4o or Claude3.5 instead?\\n\\nWe chose to use open models to ensure reproducibility, a key tenet of good scientific research. While GPT-4o or Claude3.5 offer advanced capabilities, they are accessed through APIs, which can change over time without notice. This lack of transparency can make it challenging to replicate results consistently.\"}", "{\"metareview\": \"This paper investigates whether the ability of small language models (SLMs) to learn from TinyStories is due to the simplicity or readability of the language used in that dataset (as suggested by the original TinyStories authors and many subsequent works). 
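For background on the classic readability formulas discussed in this thread, which combine sentence length and word length: the Flesch-Kincaid grade level can be sketched as below. The vowel-group syllable heuristic is an assumption for illustration; careful implementations use dictionary-based syllable counts.

```python
import re

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # FK grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Short sentences of monosyllabic words score low (even negative grades), while long sentences with polysyllabic words score high; nothing in the formula inspects meaning, which is one reason such formulas can correlate imperfectly with expert judgments of readability.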
The authors hypothesize that the success of SLMs on TinyStories might instead be attributed to the low diversity of the data, as it was generated using templates with minor variations. To test this, they created new synthetic datasets, LlamaTales-Jr (simple language) and LlamaTales-GRE (complex language), using the same generation process but varying vocabulary and target audience. Their experiments found that SLMs trained on both LlamaTales-Jr and LlamaTales-GRE achieved comparable coherence to SLMs trained on TinyStories, suggesting that readability is not the primary factor. Further experiments with diverse, real-world datasets like FineWeb, Dolma, and SlimPajama showed that SLMs trained on these datasets struggled to generate coherent text. The authors then explored n-gram diversity as a potential explanation, finding that datasets easier for SLMs to learn from (including the LlamaTales datasets) exhibited significantly lower n-gram diversity compared to the harder-to-learn real-world datasets. This leads to the finding that n-gram diversity appears to be a strong correlate with dataset learnability for SLMs. The paper challenges the interpretation that the success of TinyStories is solely due to child-directed language, arguing that it promotes anthropomorphism and misunderstanding of LM learning.\\n\\nThis paper addresses a meaningful assumption in the NLP community - if simple language is easier to learn. The experimental design is sound and the paper is well-written and clearly scoped. The study corrects a potential misinterpretation of influential prior work, particularly regarding the TinyStories paper. The investigation into n-gram diversity provides a plausible alternative explanation for the observed phenomena.\\n\\nDespite the acknowledged value of the empirical evidence, several limitations hinder the paper's impact. The analysis primarily presents findings without deeply investigating the fundamental reasons behind them. 
Concerns persist regarding potential biases introduced by the LLM-as-a-Judge evaluation method. Furthermore, the paper's focus on a specific misinterpretation may limit its broader impact and generalizability for the wider ICLR community. Critically, the inherent differences between the simplistic synthetic datasets and complex real-world data like FineWeb limit the conclusiveness of the findings, particularly regarding the nuances of human language. Strengthening the conclusions would necessitate demonstrating that synthetic datasets controlled to have high n-gram diversity could still impede the generation of coherent text by SLMs.\n\nWhile I appreciate the paper's motivation and the insights it provides, the experimental results don't fully support the current conclusion. Coupled with its relatively limited potential impact under the current framing, this leads me to lean towards rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers initially raised concerns about clarity, the breadth of quality metrics, potential evaluation bias, and the overall impact of the findings. The authors responded by adding discussion, restructuring the paper for improved flow, and implementing better cross-referencing between the main text and appendix.
Although most reviewers actively participated in the rebuttal process, the decision to recommend rejection stems primarily from the limited broader impact of the work (a point raised by reviewer r6Wo) and the insufficient strength of the experimental results to fully support the stated conclusion (a point partially raised by reviewer wvMq; although that reviewer increased their score, the area chair still has some concerns on this point).\"}", "{\"title\": \"[Reminder] Response to Authors\", \"comment\": \"Dear Reviewer,\n\nAs the rebuttal period is drawing to a close, I would appreciate your response to the authors' rebuttal at your earliest convenience.\n\nBest Regards,\n\nArea Chair\"}", "{\"summary\": \"The authors investigate the impact of training data readability on the generation abilities of very small language models (SLMs). They challenge the claim that training SLMs on simple language is the reason for their ability to generate coherent text. They create synthetic corpora with varying levels of readability, find no impact on the coherence of text generated by SLMs, and also find that training on simple language does not lead to earlier development of coherence during training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper tested a very meaningful assumption of whether simple language in training data can lead to better generation abilities of SLMs.\n2. The readability measurement approaches are comprehensively studied and analyzed.\", \"weaknesses\": \"1. The quality measurement is limited to perplexity and coherence (coherent, according to the llm-as-judge prompt, is considered \"well-structured and well-organized\", \"not just a heap of related information, but should build from sentence to sentence\").
Ignoring other dimensions of quality (for example, as the authors also mentioned, clarity and fluency) makes any statements about \"generation abilities of SLMs\" an overclaim.\n2. The quality measurement doesn't use any metrics from the original TinyStories paper: grammar, creativity, consistency with the beginning of the story (Eldan & Li, 2023). That makes the results from the two papers incomparable. Because of that, there is no evidence that \"SLMs trained on data with substantially more complex language also exhibit the same abilities as those trained on simple language\" would also hold under the measurements in Eldan & Li (2023).\n3. While the authors rule out some factors not contributing to coherent SLMs, it is unclear what factors are contributing.\", \"questions\": \"See Weaknesses.\", \"typos\": \"1. line 19: propeties -> properties\n2. line 86: exihibit -> exhibit\n3. line 527: thire -> their\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## What explains our results?\n\nIn our original draft, we observed that SLMs trained on datasets with either simple language (TinyStories, LlamaTales-Jr) or complex language (LlamaTales-GRE) could generate coherent text comparable to much larger models. However, this was not the case for SLMs trained on FineWeb, our representative of data used to train real-world LLMs. \n\nWe hypothesize that the primary difference between the datasets is their _diversity_. FineWeb, derived from web crawl data, encompasses a much broader range of language than our other datasets, which were generated by an LM using template-based prompts with minor variations. To quantify diversity, we propose measuring unique $n$-gram counts. We find that FineWeb contains significantly more unique $n$-grams than TinyStories and LlamaTales. However, this leaves us with only four data points to draw conclusions from.
In our revised draft, we present additional correlative evidence suggesting that $n$-gram diversity explains our findings.\\n\\nFirst, we generate three new synthetic datasets, each containing 1 billion tokens: Llamatales-{Sports, History, News}. These datasets are created using the same generation process as LlamaTales-GRE, but with the prompts shown in Figures A23-25 to produce documents outside the domain of short fiction stories. Similar to LlamaTales-GRE, all three datasets feature significantly more complex language than LlamaTales-Jr (as measured by readability, see Table A1), yet they produce coherent SLMs when trained on these datasets. We then examine two new examples of \\\"real\\\" training data for LMs (in addition to FineWeb): 1 billion token samples of Dolma and SlimPajama. Consistent with our experiments with FineWeb, SLMs trained on these datasets are incoherent.\\nExamples from the new datasets are provided in Table A21, and the SLM coherency results are shown in Figure A9. We have results for 33M and 9M parameter SLMs; the other model sizes are still being trained and will be included in the next revision.\\nWe also present example generations by SLMs trained on the new LlamaTales datasets in Table A22.\\n\\nAn examination of the $n$-gram diversity across our nine datasets reveals that easy-to-learn datasets (TinyStories, LlamaTales series) have significantly lower $n$-gram diversity compared to hard-to-learn datasets (FineWeb, Dolma, SlimPajama) for small values of $n$ (see Figures A26 and A29).\\nThis observation is intuitive. 
Easy-to-learn datasets contain fewer statistical patterns than hard-to-learn datasets, and SLMs have limited capacity to encode these patterns.\\nThus, our evidence suggests that the child-directed language of TinyStories is easy to learn because the data contain fewer patterns (in the form of unique $n$-grams), rather than due to any human concept of language complexity (e.g., grammatical, lexical, syntactic, or conceptual complexity).\\n\\nWe will include this expanded discussion in our next revision.\"}", "{\"comment\": \"Thank you for the constructive feedback. We address your points below.\\n\\n__W1 + Q2__\\nWe believe that the argument boils down to whether one believes that anthropomorphizing LMs is problematic and that misinterpreting the findings of [1] facilitates the anthropomorphization of LMs. We have included a more thorough discussion under \\\"Why do our results matter?\\\" in the general response, and are happy to discuss further.\\n\\n__W2.1__\\nWe actually agree with your recommendation and initially organized the paper that way, but changed the order based on early feedback on our draft.\\nReaders expressed that transitioning from dataset construction to two sections on metric validation before returning to experiments using the datasets we constructed earlier felt disjointed. We will carefully consider both your feedback and that of early readers and explore alternative options to improve the flow of the next revision.\\n\\n__W2.2__ In our revised draft, we restructured the paper such that figures and tables are better aligned to their contexts. 
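The unique $n$-gram count used above as a diversity proxy can be sketched in a few lines. The toy corpora below are illustrative stand-ins, not samples from TinyStories, LlamaTales, or FineWeb:

```python
def unique_ngram_count(tokens, n):
    """Count distinct n-grams: repetitive text yields far fewer unique
    n-grams than varied text of comparable length."""
    return len({tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)})

# A repetitive "easy-to-learn" toy corpus vs. a more varied one (hypothetical).
simple = "the cat sat on the mat and the cat sat on the rug".split()
varied = "quantum markets drift while herons archive forgotten bridges nightly".split()

print(unique_ngram_count(simple, 2))  # 8 unique bigrams out of 12 total
print(unique_ngram_count(varied, 2))  # all 8 bigrams are distinct
```

Comparing these counts across corpora of equal token budgets (as done for the nine datasets above) gives the diversity ranking discussed in the text.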
Additionally, we added bidirectional links between the main text and appendix figures/tables for easier navigation.\\n\\n__W3.1 + Q1__\\n\\n- > I should see that in (b), the green dots (SLMs trained on LlamaTales-Jr) are approximately as high as the best gray dots (LLMs), and in (c), the blue dots (SLMs trained on LlamaTales-GRE) are ALSO approximately as high as the best gray dots (LLMs).\\n\\n This is the correct interpretation. What we were trying to convey is that SLMs trained on either simple language (TinyStories, LlamaTales-Jr) or complex language (LlamaTales-GRE) generate coherent text comparable to much larger models, which we did not observe for SLMs trained on FineWeb (representative of data used to train real-world LLMs).\\n The red dashed line shows the coherence of the training split of the data used for prompting, i.e., what is achievable if the SLM perfectly reproduces its training data.\\n\\n We have included additional textual explanations to help interpret Figure 2.\\n \\n- Your suggestion to move the plots for TinyStories and FineWeb to the appendix makes sense, given that they are not core to our main claims, and we have implemented this change. We have also aligned the y-axes for Figures 2 and A1-9 to make comparisons easier.\\n\\n- Figure 3 (perplexity vs. 
parameter count) has been moved to the appendix.\\n\\n__W3.2__ We have moved most of the bottom section of Table 1 (statistics on $n$-grams, words, and syllables) to the appendix.\\n \\n__W3.3__ We have included examples from both datasets in Table A5 of the appendix and ensured that a reference to them is included in the caption of Table 1 and Section 5, where we introduce the datasets.\"}", "{\"title\": \"[Reminder] Response to Authors\", \"comment\": \"Dear Reviewer,\\n\\nAs the rebuttal period is drawing to a close, I would appreciate your response to the authors' rebuttal at your earliest convenience.\\n\\nBest Regards,\\n\\nArea Chair\"}", "{\"comment\": \"We agree that the lack of correlation between creativity and coherence or perplexity does not imply that creativity is unimportant to quality. What we mean to say is that the way we measured creativity may not be a reliable _standalone_ indicator of text quality.\\nOur reasoning is that creativity is a highly subjective concept, even more so than text quality, and can be interpreted in such a way that even nonsensical text might be considered highly creative from certain perspectives.\\n\\nWe believe Figure A8 provides evidence for our claim. In Figures A8a and A8b, the black dots represent prompt completions for children's stories by publicly available LMs. The creativity scores, as assessed by LLM-as-a-Judge, show that toy models like pythia-70m (the leftmost black dot) receive substantially higher creativity ratings than advanced models like Llama-3.1-70B.\\nWe think that most people familiar with these models would consider the text generated by Llama-3.1-70B to be of significantly higher quality than that produced by pythia-70m.\\n\\nTo make this more concrete, consider a few random generations from these models, along with their creativity and coherence scores. Here is a generation by pythia-70m. The bolded text indicates the prompt.\\n\\n> __Benny loves to play football. 
He likes to kick the ball and run. Benny wears his favorite jersey with the number 5 on it. \\\\n\\\\nHis dog\\\\'s name is Max. Max likes to run too. Benny and Max play together__. It\\\\'s small game but left a lot of room for a game that could really help. \\\\nMax\\\\'s hand flowers in his hair and he wears the neckline and the collar shoes.\\\\n\\\\nBenny and Max enjoy the mix. He says your hand flowers are going to LOVE to play. His pet is not only adorable. Especially when you can change the colour but because you look a normal. Tards, Benny compares the life on the disk to the sorts of things the wear and wear that he knows better than anybody. \\\\n\\\\nHe also makes music for movement of some kind, like after \\\"He\\\\'s already stuck up at the end when we play.\\\"\\\\n\\\\nBut he has shown that he can pull something up enough to punch the ball of his hand when he\\\\'s on the ground when he\\\\'s thinking that though the clock and bodies won\\\\'t be twisting around.\\\\n\\\\nAnd he also is not only going around in his piano, he also plays movie\\n\\nThis generation received creativity and coherence scores of 70 and 30, respectively. Now, consider the following generation by Llama-3.1-70B.\\n\\n> __Benny loves to play football. He likes to kick the ball and run. Benny wears his favorite jersey with the number 5 on it. \\\\n\\\\nHis dog's name is Max. Max likes to run too. Benny and Max play together__ every day. Sometimes Benny kicks the ball to Max, and Max catches it in his mouth.\\\\n\\\\nBenny and Max like to watch the soccer games on TV. Their favorite teams are Manchester United and Barcelona. \\\\n\\\\nOne day Benny was playing soccer with his friends. He kicked the ball really hard, and it went far away. Max ran after the ball and brought it back to Benny. They were both happy. \\\\n\\\\nBenny likes to play soccer because it is fun. He likes to run and kick the ball. Max likes to play soccer because he can run and catch the ball. 
Benny and Max are good at soccer, and they like to play together.\\n\\nThe Llama-3.1-70B generation received creativity and coherence scores of 10 and 90, respectively.\\nGiven that the generation by pythia-70m is nearly incomprehensible, we believe that most human annotators, when considering text quality, would rate the generation by Llama-3.1-70B as higher.\"}", "{\"comment\": \"Thank you for your thorough explanations addressing my concerns. I agree with the author and feel that this paper is meaningful to the relevant research community.\\n\\nWhile this paper identifies an interesting problem and provides correlative results on the findings (although not causal, which is fine to me), the scope of the impact of this work is a bit limited for a top venue with a broad audience like ICLR.\\n\\nBased on the new discussions, I raised my score, but I hold my concern that it is unclear how someone outside of this research community could learn and conduct future work from this paper's findings. This work is probably more relevant and suitable for language and linguistic venues.\"}", "{\"comment\": \"Thank you for the constructive feedback. We address your three points below.\\n\\n1. If we understand correctly, the reviewer has two questions: Why do our results matter? And what explains our results?\\n\\n To address the first question, we believe that the argument boils down to whether one believes that anthropomorphizing LMs is problematic and that misinterpreting the findings of [1] facilitates the anthropomorphization of LMs. We have included a more thorough discussion under \\\"Why do our results matter?\\\" in the general response.\\n\\n In response to the second question, our revised draft demonstrates a correlative (though not necessarily causal) relationship between $n$-gram diversity (unique $n$-gram counts) and dataset learnability. For a detailed explanation, please refer to \\\"What explains our results?\\\" in the general response.\\n \\n2. 
We agree with your points regarding clarity and made the following changes in our revision:\n - We reorganized the paper to better align figures and tables with their relevant sections. Additionally, we added bidirectional links between the main text and appendix figures/tables for easier navigation.\n - In addition, we have added references to Tables 5-16 (now A6-20), specifically in Section 6.\n - We have added a reference to Figure 16 (now A37) in the caption for Figure 4. In short, the figure is meant as a companion to Figure 4 to allow the reader to compare $n$-gram novelty across our datasets, which is difficult to see in Figure 4.\n\n3. We share your concern regarding potential bias when utilizing LLM-as-a-Judge, particularly in the choice of the LLM and prompts.\n We also want to emphasize that our goal is not to find the optimal prompt. We aim only to identify a prompt that is adequate for advancing our experiments.\n That said, we have taken several steps to validate our use of LLM-as-a-Judge:\n \n - Recognizing that LLM-as-a-Judge might be influenced by the choice of LM, we evaluated a wide range of LMs to see how their scores aligned with non-LLM assessments of readability and text quality. For readability, we tested 21 LMs against human evaluations, as detailed in the correlation matrix in Figure A33. We found several LMs with high human correlation, indicating that LLM-as-a-Judge is reliable for readability assessment, provided the LM is recent and has over 70B parameters.\n For coherence, we examined 6 LMs, all of which showed strong correlation with our model ranking (Figure A31).\n\n - We explored different prompt variations for assessing coherence and readability. \n One variant instructs the LM to generate an analysis before scoring, unlike our original prompt, which asks for an immediate score. 
Another variant provides examples of low and high-rated texts with explanations, guiding the LM before scoring.\\n For coherence, all three variants showed a similar correlation with our model ranking and each other.\\n For readability, while all variants correlated strongly with human experts, the ones prompting immediate scoring had the highest correlation.\\n The prompts and correlation matrices are detailed in Figures A10-15 and Figures A35-36, respectively.\\n\\n - In response to reviewer wvMq's feedback, we introduced five new prompts for an LM to evaluate clarity, fluency, creativity, consistency, and grammaticality, detailed in Figures A16-20. All metrics, except for creativity, supported our original findings (Figure A9), showing that our results were not unique to the coherence prompt. Creativity is uncorrelated with our non-LM measure (model ranking), indicating it may not be a reliable text quality indicator.\\n\\n - We used three new prompts to generate three 1B token datasets: LlamaTales-{History, Sports, News}. Similar to LlamaTales-GRE, these new datasets exhibit substantially higher language complexity (as measured by readability) than the children's stories in LlamaTales-Jr. Yet, SLMs trained on these datasets are competitive with much larger models in terms of coherence. The specific prompts are shown in Figures A23-25. This demonstrates that our original findings are not unique to the prompt used to generate LlamaTales-GRE.\"}", "{\"comment\": \"Thank you for your answers. I can see the point of refuting previously published misunderstandings, but I still do not consider the contribution substantial enough to warrant publication at this point.\"}", "{\"comment\": \"Thanks the authors for the detailed responses and additional experiments! I believe most of my questions have been addressed. However, I am not fully convinced by the reasoning for the statement that creativity \\\"might not be a good measure of quality\\\". 
Specifically, creativity having no correlation with coherence and perplexity doesn't mean it is unimportant to quality -- it can be an orthogonal dimension that is still valid. Do authors have stronger evidence to support that statement?\"}", "{\"comment\": \"We also notice a different but related issue in Figures A8c and A8d, where the black dots represent prompt completions for general fiction stories by publicly available LMs. The creativity scores among these LMs are nearly indistinguishable (except for the smallest LMs, but even then, their scores are still high). We also see that creativity does not exhibit a dynamic range comparable to any other text quality measure. Below is an example generated by pythia-70m.\\n\\n> __In the quaint town of Ravenswood, nestled between the rolling hills of the countryside, a sense of tranquility reigned supreme. The residents, with their warm smiles and gentle mannerisms, embodied the very essence of bonhomie \\u2013 an__ urban legend \\u2013 which felt deeply left by the neighbours\\u2019 general expectation of human nature.\\\\n\\\\n\\u201cA world of hand flowers, fair girls, gulls, checkered lichen shoes,\\u201d is professor of social history and the former yearning to eradicate the dinosaur remains.\\\\n\\\\nThe New World has been seen as a \\u2018feral character\\u2019 by the biologist Marc Orr and an expert on perseverance and habit throughout life \\u2013 in the case of sorts, their significant figures, they often sink into the tough old quarrels and feats of humankind from parochial matron after farm-like periods. Now an obvious museum says that, before people can recreate the glory in dying civilisations (whose hallowed living-rooms make much of this movie \\u2013 there is a pun \\u2013 Cream of the Fairness trilogy \\u2013 and are a wonder for themselves), they al\\n\\nThis pythia-70m generation received a creativity score of 80 and a coherence score of 10. 
In contrast, here is a generation by Llama-3.1-70B.\\n\\n> __In the quaint town of Ravenswood, nestled between the rolling hills of the countryside, a sense of tranquility reigned supreme. The residents, with their warm smiles and gentle mannerisms, embodied the very essence of bonhomie \\u2013 an__ unspoken camaraderie that bound them together. The chimneys of each house billowed with smoke, a testament to the cozy hearths that illuminated the otherwise cold evenings. As the golden autumn leaves cascaded down from the trees, the townsfolk reveled in the simple pleasures of life, united by their shared appreciation for the beauty of their surroundings. Little did they know, beneath this picturesque veneer, a sinister secret lurked, threatening to shatter their idyllic existence.\\\\nI. The Whispering Winds of Ravenswood\\\\nAs the sun dipped beneath the horizon, casting a scarlet hue across the sprawling landscape, the winds began to stir. At first, it was a gentle whisper, a soft caress against the cheeks of those who strolled along the cobbled streets. However, as the night deepened\\n\\nThis Llama-3.1-70B generation received scores of 85 for creativity and 95 for coherence. As before, we think it is reasonable to say that the pythia-70m generation is of lower quality.\\n\\nIn summary, we agree that creativity can be a useful dimension of text quality, but our findings suggest that it should not be considered independently of the other measures we have examined. Given that creativity is a highly subjective concept, even more so than text quality, it can be interpreted in such a way that even nonsensical text might be considered highly creative from certain points of view.\\n \\nWe thank the reviewer again for their constructive feedback and openness to discussing our work. We are happy to provide further clarification on this topic or any other concerns the reviewer may have.\"}" ] }
4wuvmJRAU4
Interfering with Interference: Blind Shuffling and Superposition for Better Multi-Model Compression
[ "Hangyu Zhou", "Aaron Gokaslan", "Volodymyr Kuleshov", "Bharath Hariharan" ]
We present two complementary random mechanisms to significantly reduce interference when eliminating cross-model redundancy for efficient multi-model serving: _Layer Shuffling_ and _Task Vector Superposition_. They work together to increase the orthogonality among interfering task vectors, forcing them into self-destruction without requiring any post-training learning or optimization. _Layer Shuffling_ randomly reorders the layers of each individual model to reduce the alignment between interfering task vectors, while _Task Vector Superposition_ leverages random orthogonal transformations to decorrelate task vectors further. Together, these techniques drastically minimize interference, yielding improved performance across multiple tasks with effectively zero incremental memory cost when incorporating new models. Their data- and model-independent nature also allows for seamless on-the-fly addition or removal of models, without requiring any re-computation, making them highly practical for real-world deployment scenarios.
[ "Task Arithmetic", "Superposition", "Model Merging", "Multi-model Compression", "Model Serving" ]
Reject
https://openreview.net/pdf?id=4wuvmJRAU4
https://openreview.net/forum?id=4wuvmJRAU4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yn1aqLmkrF", "qEdTU8dyNL", "nyAwYlsHMF", "kJcDMB3hzn", "jzNhiIjiR1", "d9Olbhrf4b", "XOivr07c8p", "TETlWGLZAs", "NguHFwV0bd", "CD8JWDWD5A", "9LNguegNBy", "8VpPhoYFXg", "4MFGL2paGs", "46i5ULqMGb", "2GNBSCWU6d", "1fYllHP3tY", "1eOxy0p6rG" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732831864257, 1730550306968, 1732767467345, 1732765695156, 1732762269145, 1730675196527, 1734668165498, 1732763209016, 1732765513937, 1732833413916, 1737523916415, 1732762380085, 1732766630856, 1730719784652, 1732763330332, 1732833098552, 1730352768523 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Reviewer_mt5T" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Reviewer_JsQG" ], [ "ICLR.cc/2025/Conference/Submission8535/Area_Chair_kGTW" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Reviewer_gJhr" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Authors" ], [ "ICLR.cc/2025/Conference/Submission8535/Reviewer_jGUu" ] ], "structured_content_str": [ "{\"comment\": \"**[GR3] Dynamics Between Layer Shuffling and Superposition (Reviewer gJhr and jGUu)**\\n\\nReviewers gJhr and jGUu 
noted that the interaction between layer shuffling and superposition wasn't thoroughly analyzed, particularly their effectiveness variation across different tasks. Following these suggestions, we conducted a detailed analysis in the newly added **Appendix D.2**, which revealed several important insights:\\n1. **Higher Skip Rates Increase Interference**: We introduced \\\"skip rates\\\" where a skip rate of k means only every k-th target layer within repetitive layer sets is shuffled and superposed. Higher skip rates led to increased cosine similarity and decreased performance.\\n2. **Performance Remains Robust at Lower Skip Rates**: Performance maintains stability up to skip rate 2 (Figure 6), suggesting potential for halving memory footprint through selective layer manipulation.\\n3. **Decorrelation Method Impact**: We introduced layer shifting as a deterministic alternative to layer shuffling, where layers move one position deeper with wrap-around. Notably, for GTSRB, TA+Shuffle outperformed TA+Shift despite higher cosine similarity, indicating that the method of achieving orthogonality matters beyond the decorrelation level.\\n4. **Task-Dependent Variations**: While TA+Shift underperformed TA+Shuffle on GTSRB, this pattern reversed for SUN397. The correlation between cosine similarity and accuracy also varied across tasks (Figure 7), highlighting opportunities for task-specific optimization.\\n5. **Cosine Similarity Threshold Hypothesis**: Despite these method and task-specific behaviors, we found that optimal performance consistently occurs at the lowest cosine similarity (Figure 7). This led us to hypothesize a critical cosine similarity threshold below which method selection and task properties become less important. This explains the dominant effectiveness noted by Reviewer jGUu: the STA method achieves near-zero cosine similarity on its own, pushing below this threshold for strong performance. 
Adding layer shifting (STA+Shift) or shuffling (STA+Shuffle) only marginally reduces the already-low similarity, yielding minimal gains. In contrast, layer shifting/shuffling alone cannot reduce similarity enough to cross this threshold, resulting in inferior performance.\\n\\nWe have highlighted these insights in **Appendix D.2** to help develop more effective interference reduction strategies. We plan to explore more scenarios and expand our cosine similarity threshold hypothesis in the next version of our paper.\\n\\n\\n[1] Wang, K., Dimitriadis, N., Ortiz-Jimenez, G., Fleuret, F., & Frossard, P. (2024). Localizing Task Information for Improved Model Merging and Compression. arXiv preprint arXiv:2405.07813.\\n\\n[2] Ilharco, G., Ribeiro, M. T., Wortsman, M., Gururangan, S., Schmidt, L., Hajishirzi, H., & Farhadi, A. (2022). Editing models with task arithmetic. arXiv preprint arXiv:2212.04089.\\n\\n[3] Tang, A., Shen, L., Luo, Y., Xie, S., Hu, H., Zhang, L., ... & Tao, D. (2024). Smile: Zero-shot sparse mixture of low-rank experts construction from pre-trained foundation models. arXiv preprint arXiv:2408.10174.\\n\\n[4] Tang, A., Shen, L., Luo, Y., Yin, N., Zhang, L., & Tao, D. (2024). Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. arXiv preprint arXiv:2402.00433.\"}", "{\"summary\": \"The paper introduces two methods, Layer Shuffling and Task Vector Superposition, aimed at reducing interference when compressing multiple fine-tuned models into a single multitask model using task vector arithmetic. Layer Shuffling works by randomly reordering layers in each model before merging, reducing alignment between task vectors. Task Vector Superposition applies random orthogonal transformations to further decorrelate task vectors. Both techniques minimize interference and improve performance across tasks. 
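To make the decorrelation mechanics in [GR3] concrete, here is a small sketch of the deterministic layer-shifting variant: two synthetic per-layer task vectors are built to be well aligned (high interference), and shifting one model's layers one position deeper with wrap-around drives their cosine similarity toward zero. The shapes and synthetic construction are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 8, 64  # number of layers and parameters per layer (toy sizes)

# Two synthetic task vectors sharing a common per-layer component -> well aligned.
base = rng.standard_normal((L, d))
tv_a = base + 0.3 * rng.standard_normal((L, d))
tv_b = base + 0.3 * rng.standard_normal((L, d))

def cosine(u, v):
    u, v = u.ravel(), v.ravel()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

before = cosine(tv_a, tv_b)  # high: corresponding layers line up

# Layer shifting (Appendix D.2): move each layer of one model one position
# deeper, with wrap-around, so corresponding layers no longer align.
tv_b_shifted = np.roll(tv_b, shift=1, axis=0)
after = cosine(tv_a, tv_b_shifted)  # close to zero
```

Random layer shuffling replaces `np.roll` with a seeded random permutation of the layer axis; the cosine-similarity drop is the quantity tracked in Figures 6-7.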
Experiments with CLIP-ViT, Flan-T5, and GPT-2 show that this approach achieves higher accuracy than vanilla task arithmetic and other baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper makes an observation that individual task vectors are too similar and successfully uses it to reduce task vector interference, leading to better multitask performance in model compression scenarios.\", \"Both proposed techniques operate without needing data, allowing flexible model addition or removal without retraining or optimization.\", \"The method achieves storage reduction compared to keeping individual models.\", \"The approach is shown to improve performance across diverse domains including image classification, text generation, and text classification.\", \"The method enables on-the-fly model integration, allowing seamless \\\"hot-swapping\\\" of models.\", \"The paper is very well written and clearly structured.\"], \"weaknesses\": [\"While the paper compares its method to several baseline techniques, it misses comparison with closely related recent works, particularly Guillermo Ortiz-Jimenez et al.'s work (mentioned in the paper) on task vector manipulation for model merging. 
Including these comparisons would strengthen the submission.\", \"Although the authors claim minimal memory overhead, additional context matrices and shuffled task vectors nearly double the memory requirement, which may not always justify the marginal performance gains over baselines like SMILE.\", \"LoRA results show that SMILE achieves a better tradeoff between accuracy and memory than the reported combination of the proposed methods.\"], \"questions\": \"Q1: Although randomization offers clear advantages like data independence, would a more systematic approach to orthogonalizing task vectors further improve the performance?\", \"q2\": \"Did you observe any (in-)consistent performance variance due to randomness in shuffling and superposition?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**3. Justifying Our Method on the Performance and Memory Tradeoff**\\n\\nWe appreciate the reviewer\\u2019s insightful feedback regarding the comparison of model sizes and their impact on performance. 
To clarify the tradeoff between performance and memory usage, we present the average accuracy and memory footprint (in GB) of our method compared to baselines across multiple benchmarks:\\n\\n\\n| | **8xViT-L/14** | **14xViT-L/14** | **20xViT-L/14** | **8xViT-B/32** | **8xFlan-T5-base LoRA** |\\n| ------------------------ | --------------------- | --------------------- | --------------------- | --------------------- | ----------------------- |\\n| **Method** | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) |\\n| Pre-trained | 64.5 (1.59) | 68.1 (1.59) | 65.2 (1.59) | 48.2 (0.56) | 75.7 (1.19) |\\n| Fine-tuned | 94.4 (10.53) | 93.5 (18.18) | 94.2 (25.84) | 90.3 (2.84) | 84.6 (1.25) |\\n| Task Arithmetic | 84.0 (***1.59***) | 79.1 (***1.59***) | 73.8 (***1.59***) | 69.8 (***0.56***) | 77.4 (***1.19***) |\\n| TALL Mask + TA | **94.2** (5.42) | **92.4** (7.34) | **93.2** (9.25) | - | - |\\n| WEMoE | - | - | - | 89.2 (2.27) | - |\\n| SMILE | - | - | - | **89.3** (1.23) | 84.0 (1.21) |\\n| **STA + Shuffle (Ours)** | ***94.3*** (**2.87**) | ***93.0*** (**2.87**) | ***93.5*** (**2.87**) | ***89.9*** (**0.89**) | ***84.4*** (**1.20**) |\", \"key_observations\": \"1. **SoTA Models Use More Memory But Perform Worse**: Compared to SoTA models (TALL-masks [1], WEMoE [4], SMILE [3]), our STA+Shuffle algorithm uses less memory while having higher accuracy.\\n2. **Lightweight Baselines Underperform Despite Lower Memory Usage**: Although Task Arithmetic [2] uses only 55% the memory of our method, they exhibit substantially lower accuracy, especially as the number of tasks increases (e.g., 73.8% on 20xViT-L/14) or when merging LoRA models (e.g., 77.4% on 8xFlan-T5-base LoRA). These two scenarios are highly relevant to real-world applications.\\n3. 
**Scalability in Real-World Scenarios**: As discussed in **[GR1]**, our method maintains high performance with a constant memory footprint even when more models are merged. This scalability distinguishes our approach from other baselines, making it more suitable for real-world applications where merging dozens of models may be necessary.\\n\\nIn summary, our STA + Shuffle method demonstrates a favorable balance between performance and memory usage, outperforming both larger SoTA models and smaller baselines. This establishes a clear advantage in scenarios requiring efficient memory management without compromising accuracy.\\n\\n\\n[1] Wang, K., Dimitriadis, N., Ortiz-Jimenez, G., Fleuret, F., & Frossard, P. (2024). Localizing Task Information for Improved Model Merging and Compression. arXiv preprint arXiv:2405.07813.\\n\\n[2] Ilharco, G., Ribeiro, M. T., Wortsman, M., Gururangan, S., Schmidt, L., Hajishirzi, H., & Farhadi, A. (2022). Editing models with task arithmetic. arXiv preprint arXiv:2212.04089.\\n\\n[3] Tang, A., Shen, L., Luo, Y., Xie, S., Hu, H., Zhang, L., ... & Tao, D. (2024). Smile: Zero-shot sparse mixture of low-rank experts construction from pre-trained foundation models. arXiv preprint arXiv:2408.10174.\\n\\n[4] Tang, A., Shen, L., Luo, Y., Yin, N., Zhang, L., & Tao, D. (2024). Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. arXiv preprint arXiv:2402.00433.\"}", "{\"comment\": \"**Q1: Although randomization offers clear advantages like data independence, would a more systematic approach to orthogonalizing task vectors further improve the performance?**\\n\\nWe agree that systematically orthogonalizing task vectors could further enhance performance. As shown in Figure 2 and Table 1, there is a clear correlation between task vector orthogonality and performance, and current vectors are not perfectly perpendicular after shuffling and superposition. 
An interesting direction for future work would be to maximize orthogonality based on model parameters and analyze the resulting performance changes. However, such a parameter-dependent approach may make it difficult to specify task-specific operations in a memory-efficient way or enable incremental model addition and subtraction. Therefore, we focused on our random algorithm, which effectively addresses these challenges and is well-suited for resource-constrained environments.\\n\\n**Q2: Did you observe any (in-)consistent performance variance due to randomness in shuffling and superposition?**\\n\\nOur analysis of accuracy variance across all variants showed minimal variation, with values below 0.1% (not shown in tables) in most cases. The only exceptions were certain tasks using GPT-2.\\n\\n[1] Wang, K., Dimitriadis, N., Ortiz-Jimenez, G., Fleuret, F., & Frossard, P. (2024). Localizing Task Information for Improved Model Merging and Compression. arXiv preprint arXiv:2405.07813.\\n\\n[2] Tang, A., Shen, L., Luo, Y., Xie, S., Hu, H., Zhang, L., ... & Tao, D. (2024). Smile: Zero-shot sparse mixture of low-rank experts construction from pre-trained foundation models. arXiv preprint arXiv:2408.10174.\"}", "{\"title\": \"General Responses\", \"comment\": \"We sincerely thank all reviewers for their thoughtful feedback and suggestions. Below, we address the key concerns regarding memory-performance tradeoffs, model scaling, and the interplay between shuffling and superposition. The revisions and corresponding results have been incorporated into the manuscript, with all changes highlighted in blue.\\n\\n\\n**[GR1] Memory Efficiency Clarification (Reviewer JsQG, mt5T, and jGUu)**\\n\\nWe appreciate the reviewers' valid concerns regarding the memory footprint of our methods. To address these concerns, we present new findings demonstrating that our approach requires saving only the pre-trained model and a merged task vector. 
Merging additional models involves storing just a single 8-bit random seed per model. Our method introduces less than 1 s of overhead during the forward pass for retrieving each model and maintains high performance. Note that this runtime overhead does not depend on the number of merged models.\\n\\n1. **Compressing Additional Models into 8-bit Random Seeds**\\n\\nOur methods\\u2014random layer shuffling and task vector superposition\\u2014are random operations independent of both data and model parameters. This property allows us to retrieve each merged model by reconstructing the layer shuffling orders and the inverses of the context matrices from a single random seed (which requires only 8 bits when fewer than 256 models are merged). Although we double the memory footprint due to the need to manipulate the merged task vector differently for each task, this initial investment remains fixed, regardless of the number of models being merged.\\n\\n2. **Achieving Near-Fine-Tuned Performance with Reduced Memory Usage**\\n\\nOur methods achieve performance close to that of fine-tuned models across all testing scenarios. Thanks to the feedback from Reviewer JsQG, we realized that when merging CLIP-ViT-B/32 models, our methods consume less memory than previously estimated, as we mistakenly counted the text encoder twice. 
Here is the updated Table 1 (truncated to highlight key comparisons):\\n\\n**Performance comparison on eight image classification tasks with CLIP-ViT-B/32 models**\\n| **Method** | **Avg.(%) \\u2191** | **Bits(Gb) \\u2193** |\\n| ---------------------- | ----------------- | ------------------ |\\n| Pre-trained | 48.2 (53.4) | 0.564 (1.00) |\\n| Fine-tuned | 90.3 (100) | 2.84 (5.03) |\\n| Task Arithmetic | 69.8 (77.2) | ***0.564 (1.00)*** |\\n| WEMoE | 89.2 (98.8) | 2.27 (4.03) |\\n| SMILE | **89.3 (98.9)** | 1.23 (2.20) |\\n| **STA+Shuffle (Ours)** | ***89.9 (99.6)*** | **0.89 (1.58)** |\\n\\nThis updated table demonstrates that our algorithm achieves near-perfect accuracy while using significantly less memory compared to SMILE [3] and WEMoE [4].\\n\\nMoreover, as shown in **[GR2]**, our STA+Shuffle algorithm maintains accuracy close to that of individually fine-tuned baselines when merging 14 or even 20 CLIP-ViT-L/14 models, with a constant memory footprint of less than twice that of a single model. In contrast, the state-of-the-art baseline TALL-masks uses 323% more memory than our method while achieving slightly lower performance when merging 20 CLIP-ViT-L/14 models. Finally, **[GR3]** shows that we can **halve** the current memory footprint by selecting a subset of layers to shuffle and superpose.\\n\\nWhile methods like Task Arithmetic consume approximately 50% less memory than ours, their performance is significantly lower, with accuracy scores resembling those of zero-shot pre-trained models when more and larger models are merged, as noted in **[GR2]**.\\n\\n3. **Minimal Runtime Overhead in Forward Pass for Model Retrieval**\\n\\nTo evaluate the overhead introduced by our method, we conducted experiments using a single random seed to retrieve models with shuffling and superposition applied to all layers. For the CLIP-ViT-B/32 model, the retrieval process executed on an Intel Xeon Gold 6448Y CPU over 10 repetitions yielded an average runtime of 292.70 ms. 
For the larger CLIP-ViT-L/14 model, the average runtime was 658.19 ms under the same conditions. We anticipate that optimizations such as utilizing GPUs could reduce these runtimes further. Since our method shows no increase in memory footprint as more models are merged, these overhead measurements also apply to scenarios where 14 or 20 CLIP-ViT-L/14 models are merged.\\n\\nIn conclusion, the new findings show that our algorithm can achieve near-perfect accuracy with constant memory use and minimal runtime overhead. It outperforms state-of-the-art methods in performance and memory efficiency, offering a simple yet powerful solution for model merging and serving.\"}", "{\"summary\": \"The paper proposes two stochastic mechanisms to improve performance for multi-task model merging by reducing task interference. First, the method takes advantage of the repeating structure of modern neural networks and randomly shuffles same-module layers across blocks, after first showing that layers within a block are mostly similar across tasks. Second, the paper proposes random binary matrices to multiply parameter vectors to further reduce the task vector similarity. During inference, the inverse transforms are applied. The paper performs experiments across diverse benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The method is intuitive and simple. The motivation for both components is well written and properly ablated.\\n2. The method is scalable and memory efficient (modulo the duplication of model parameters) given that it only requires the storage of random seeds to retrieve the final model.\\n3. The experimental results are strong across benchmarks.\", \"weaknesses\": \"1. The method has a limitation that is not discussed a lot, apart from the title: it requires the knowledge of the task id during inference. 
This needs to be underlined during the comparison with methods such as ties and task arithmetic for fairness.\\n2. lack of forward pass time complexity comparison. The proposed method introduces an overhead in the forward pass: layers need to be reshuffled in the correct order, signs need to be restored and the residual needs to be added to the pre-trained weights. Therefore, there should be a study of how much overhead all these operations incur.\\n3. Missing baselines: Given the parameter increase and the time complexity overhead, the paper should compare with the compression algorithm of [1].\\n4. The paper solely focuses on small models, base variants on ViT and Flan-T5, but the literature uses ViT-L/14 and T5-XXL regularly. It would also be interesting to check the performance of the method as tasks increase, see 14 and 20 task benchmarks from [1]. It would be interesting to also track the forward pass time metrics in the case of larger models.\\n5. L268-269: fix references for benchmarks: the vision one for instance comes from Ilharco et al. and not from FusionBench\\n6. Baselines and their categorization are not explained and the reader cannot understand why PSP is included given its poor results or what WEMoE and SMILE are in their own category compared to everything else. It would be helpful for the reader to provide a brief description of each method as well as a high level overview of the categories to help the reader understand rather than deferring to the appendix where they are actually not discussed.\\n7. Extremely limited Related work: the quality of the paper is heavily undermined by the lack of proper references and discussion of related work.\\n\\nMinor Comments\\n\\n\\n- L202: ontain \\u2192 obtain\\n- Rephrase informal writing:\\n\\t- L217: \\u201cwhich we expect to be much lower\\u201d\\n\\t- L150: \\u201cbut this balance has generally been tricky to achieve\\u201d\\n\\n[1] Wang, K., Dimitriadis, N., Ortiz-Jimenez, G., Fleuret, F. 
and Frossard, P., 2024. Localizing Task Information for Improved Model Merging and Compression. *arXiv preprint arXiv:2405.07813*.\", \"questions\": \"1. why is 5.03 the number of Gb attributed to fine-tuned? shouldn\\u2019t it be 8x the pre-trained model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The majority of the reviewers raised serious concerns regarding several aspects such as presentation clarify, methodological justification, empirical evaluation and contribution significance. The authors tried to response positively to those concerns, but failed to change the reviewer\\u2019s opinion. Overall, the paper may have potentials but need be improved a lot. This assertion also coincides with the author rebuttal, which indicates explicitly \\u201cTo address your concern comprehensively, we will incorporate a task classifier to our method in the NEXT VERSION of our paper.\\u201d\", \"additional_comments_on_reviewer_discussion\": \"The authors provided extensive rebuttal to try resolving reviewers\\u2019 concerns, implying that it is better to let the paper go through another round of peer review rather than accepting it at its current form.\"}", "{\"comment\": \"**1. Forward Pass Time Complexity Comparison**\\n\\nThank you for this important question. As detailed in **[GR1]**, we evaluated our method's runtime overhead when using random seeds to retrieve models with shuffling and superposition applied to all layers. The retrieval process on an Intel Xeon Gold 6448Y CPU averaged 292.70 ms for CLIP-ViT-B/32 and 658.19 ms for CLIP-ViT-L/14 over 10 runs. These overheads remain constant regardless of the number of merged models, as our method introduces no additional memory requirements. GPU optimization could further reduce these runtimes. 
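For intuition about what the timed retrieval step involves, here is a simplified Python sketch of seed-based reconstruction. It assumes layer shuffling is encoded as a permutation and each context matrix as a ±1 sign mask, both regenerated from the per-model seed; this is our own illustration under those assumptions, not the actual implementation:

```python
import numpy as np

def transform(layers, seed):
    """Apply one model's layer shuffling and +/-1 superposition (forward).

    `layers` is a list of equally shaped arrays, since only same-shape
    layers can be shuffled with one another. The seed deterministically
    generates both the shuffling order and the sign masks.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(layers))               # new position of each layer
    signs = [rng.choice([-1.0, 1.0], size=l.shape) for l in layers]
    out = [None] * len(layers)
    for i, layer in enumerate(layers):
        out[perm[i]] = layer * signs[i]               # apply sign, then shuffle
    return out

def inverse_transform(layers, seed):
    """Undo the transform by regenerating permutation and signs from the seed.

    A +/-1 diagonal context matrix is its own inverse, so un-shuffling and
    re-applying the signs recovers the original layers exactly.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(layers))
    signs = [rng.choice([-1.0, 1.0], size=l.shape) for l in layers]
    return [layers[perm[i]] * signs[i] for i in range(len(layers))]
```

At serving time, only the seed needs to be stored per model; running the inverse transform over all layers is what contributes the sub-second retrieval overhead measured above.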
These measurements have been added to the \"Amortizable Memory Overhead\" section in our revision, and an asymptotic complexity analysis will be included in the next version of the paper.\\n\\n**2. Missing Baselines from TALL-masks**\\n\\nThank you for pointing this out. As noted in **[GR2]**, we've added comparisons with TALL-masks [1] algorithms, specifically TALL-masks + TA and Magnitude Masking. We excluded Magnitude Pruning due to its consistently lower performance and missing implementation. While TALL-masks can be enhanced with orthogonal techniques like TIES [2], we excluded this variant for fair comparison. In a future version of our paper, we plan to enhance our approach with techniques like Adamerging [3] and include comparisons with TALL-masks+TIES.\\n\\n**3. Extending Experiments to Larger Models and More Tasks**\\n\\nThank you for your suggestion. We have conducted new experiments merging 8, 14, and 20 ViT-L/14 models as discussed in [GR2]. We will fine-tune and merge even larger models in greater quantities, and include the results in our paper.\\n\\n\\n**4. Discussing Task ID Requirement During Inference for Fair Comparisons**\\n\\nWe acknowledge that our method requires the task ID during inference and will emphasize this in our comparison with other methods. However, we believe the comparison remains fair for the following reasons:\\n1. **Other Baselines Also Require Task IDs**: For example, when merging image classification models, all baselines require a task ID to select the task-specific classification head for inference. This requirement is made clear in Fisher Merging [7] but not explicitly discussed in many other baselines.\\n2. **Task IDs Enable Strong Performance**: Leveraging task IDs allows task-specific retrieval, contributing to our method\\u2019s high performance and low memory footprint. This is a desirable feature and aligns with the goals of model merging and compression.\\n3. 
**Task Identification is Context-Dependent**: In completely task-agnostic environments, it\\u2019s possible to infer the task ID from the input, as WEMoE [5] and SMILE [6] do. However, in many real-world scenarios where task-specific queries or datasets are naturally tagged with task IDs (e.g., multi-tenant APIs or application-specific deployments), task IDs are readily available. The system-specific nature of task identification and its orthogonality to our core contributions have led us to focus on other aspects rather than developing a separate task identification mechanism.\\n\\nTo address your concern comprehensively, we will incorporate a task classifier into our method in the next version of our paper.\\n\\n**5. Providing Detailed Explanations and Categorization of Baseline Methods**\\n\\nThank you for these thoughtful suggestions. We have enhanced the clarity of our work by:\\n* Revising the Experimental Setup section to better explain model categorization\\n* Adding Appendix B with comprehensive details on all baselines, datasets, models, and experimental setups\\n* Clarifying PSP [8]'s role as an online learning model superposition baseline adapted to demonstrate the effectiveness of our task vector superposition approach in the offline setting\\n\\n\\n**6. Expanding and Enhancing the Related Work Section**\\n\\nWe thank the reviewer for this important feedback. We have thoroughly expanded our Related Work section to provide more comprehensive coverage of model merging and compression literature. The revised manuscript now includes detailed discussions and proper references in these areas. We welcome specific suggestions for any missing critical references to include in the final version.\\n\\n**7. Question: Why is 5.03 the number of Gb attributed to fine-tuned? shouldn\\u2019t it be 8x the pre-trained model?**\\n\\nThank you for this important question. 
The 5.03x figure for fine-tuned models appears lower than 8x the pre-trained model size because our CLIP models from FusionBench [4] use a frozen text encoder during fine-tuning. Only the vision encoders need to be stored separately for individual fine-tuned models. We have clarified this in the manuscript.\"}", "{\"comment\": \"**1. Including Comparisons with Recent Task Vector Manipulation Methods**\\n\\nWe appreciate the suggestion to compare our algorithm with recent task vector manipulation methods, specifically TALL-masks [1]. We have now evaluated our approach against TALL-masks on merging 8, 14, and 20 ViT-L/14 models. The experiment results and discussion are now included in **[GR2]** and the \\u201cScalability Analysis\\u201d section of our revision.\\n\\n**2. Clarifying Memory Overhead and Performance Tradeoff with SMILE as a Comparison**\\n\\nThank you for raising this important consideration about memory efficiency. As discussed in the \\\"Amortizable Memory Overhead\\\" section of our paper and in **[GR1]**, our method requires only storing one additional task vector instance beyond the pre-trained model. Subsequent models can be merged by saving just their random seeds. This results in a constant memory footprint, which becomes increasingly advantageous when merging more and larger models, as elaborated in **[GR2]**.\\n\\nTo address your concern about whether our memory overhead justifies the performance gains over baselines like SMILE [2], we have prepared a comparison between our method and SMILE across multiple benchmarks. 
The average accuracy and storage cost (in GB) are presented below:\\n\\n**Performance and Memory Footprint Comparison with SMILE Across Benchmarks**\\n| | **8xViT-B/32** | **8xFlan-T5-base** | **8xFlan-T5-base LoRA** |\\n| ------------------------ | ------------------- | ------------------ | ----------------------- |\\n| **Method** | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) |\\n| Pre-trained | 48.2 (0.56) | 75.7 (1.19) | 75.7 (1.19) |\\n| Fine-tuned | 90.3 (2.84) | 86.4 (9.52) | 84.6 (1.25) |\\n| SMILE | 89.3 (1.23) | 85.5 (**1.81**) | 84.0 (1.21) |\\n| **STA + Shuffle (Ours)** | **89.9** (**0.89**) | **86.4** (2.38) | **84.4** (**1.20**) |\\n\\nOur STA+Shuffle algorithm outperforms SMILE across all benchmarks, while using less memory on the 8xViT-B/32 and 8xFlan-T5-base LoRA tasks. Although we use more memory than SMILE on the 8xFlan-T5-base benchmark, our performance matches that of the fine-tuned baseline, effectively saturating this benchmark. We plan to compare our method with SMILE on more challenging scenarios (e.g., merging 20 ViT-L/14 models) and will include those results in the next version of our manuscript.\\n\\n**3. Comparing Accuracy and Memory Trade-offs with SMILE on LoRA Models**\\n\\nThank you for raising this point. We discovered that in our initial experiment, we had not properly tuned the merging coefficient lambda, instead using a fixed value of 1.0. This was suboptimal compared to coefficients used in other benchmarks. After correcting this oversight, we performed a grid search on the validation set to find the optimal lambda value. 
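For concreteness, the grid search amounts to sweeping the merging coefficient lambda over a small candidate set and scoring each merged model on the validation split, following the usual task-arithmetic formulation theta = theta_pre + lambda * tau. The sketch below is illustrative only; the candidate grid and function names are our own:

```python
import numpy as np

def tune_lambda(theta_pre, merged_tv, evaluate, candidates=(0.1, 0.3, 0.5, 0.7, 1.0)):
    """Select the merging coefficient that maximizes validation accuracy.

    evaluate(params) -> validation score for a flat parameter vector;
    theta_pre is the pre-trained model and merged_tv the merged task vector.
    """
    best_lam, best_acc = None, -float("inf")
    for lam in candidates:
        # Merge with this coefficient and score on held-out validation data.
        acc = evaluate(theta_pre + lam * merged_tv)
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam, best_acc
```

The same one-dimensional sweep applies regardless of how many models were merged, since only a single shared coefficient is tuned.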
This adjustment led to significantly improved results, as shown in the following table:\\n\\n**Multi-task performance when merging Flan-T5-base LoRA models on eight GLUE tasks**\\n| **Method** | **Avg.(%) \\u2191** | **Bits(Gb) \\u2193** |\\n| ---------------------- | ----------------- | ----------------- |\\n| Pre-trained | 75.7 (87.6) | 1.19 (1.00) |\\n| Fine-tuned | 84.6 (100) | 1.25 (1.05) |\\n| Task Arithmetic | 77.4 (91.5) | ***1.19 (1.00)*** |\\n| SMILE | **84.0 (99.3)** | 1.21 (1.02) |\\n| **STA+Shuffle (Ours)** | ***84.4 (99.8)*** | **1.20 (1.01)** |\\n\\nThe updated results demonstrate that our STA+Shuffle algorithm surpasses SMILE [2], achieving higher accuracy while maintaining lower memory requirements. We have incorporated this revised analysis into the manuscript\\u2019s \\u201cParameter Efficient Finetuning (PEFT) Model Compression\\u201d section.\"}", "{\"title\": \"A Brief Follow-up\", \"comment\": \"We have also added a detailed analysis of the interaction between layer shuffling and superposition in **[GR3]** and **Appendix D.2** in the revision file, which provides further empirical insights into their dynamics and task-dependent effectiveness. Thank you again for helping us improve this important aspect of our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"**[GR2] Benchmarking Against TALL-masks on Larger Models and More Tasks (Reviewer JsQG and mt5T)**\\n\\nReviewers JsQG and mt5T requested performance comparisons with recent works, particularly the compression baselines from TALL-masks [1]. Reviewer JsQG also emphasized the need for evaluation on a broader range of tasks (e.g., 14 and 20 tasks) and larger models (e.g., ViT-L/14, T5-XXL). In response, we present the performance of our method on the ViT-L/14 benchmark across 8, 14, and 20 tasks, as provided in TALL-masks [1].\\n\\nThe average accuracy and storage cost (in GB) estimates are presented here. 
These scores differ slightly from those reported in TALL-masks [1] due to corrections we made to dataset split issues in SUN397, EuroSAT, and DTD. Furthermore, the storage cost figures vary from the previously reported values, as the original estimates were inaccurate.\\n\\n**ViT-L/14 Performance Comparison with 8, 14, and 20 Tasks**\\n| | **8 tasks** | **14 tasks** | **20 tasks** |\\n| ------------------------ | --------------------- | --------------------- | --------------------- |\\n| **Method** | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) | Acc.\\u2191 (Bits\\u2193) |\\n| Pre-trained | 64.5 (1.59) | 68.1 (1.59) | 65.2 (1.59) |\\n| Task Arithmetic | 84.0 (***1.59***) | 79.1 (***1.59***) | 73.8 (***1.59***) |\\n| Magnitude Masking | 92.8 (5.42) | 90.6 (7.34) | 90.9 (9.25) |\\n| TALL-masks + TA | **94.2** (5.42) | **92.4** (7.34) | **93.2** (9.25) |\\n| **STA + Shuffle (Ours)** | ***94.3*** (**2.87**) | ***93.0*** (**2.87**) | ***93.5*** (**2.87**) |\\n| Fine-tuned | 94.4 (10.53) | 93.5 (18.18) | 94.2 (25.84) |\\n\\nKey observations:\\n\\n1. **High Performance**: Our STA + Shuffle algorithm consistently outperforms alternative methods when merging ViT-L/14 models. Specifically, it achieves performance close to that of individually fine-tuned models, even when merging as many as 14 or 20 ViT-L/14 models.\\n2. **Efficient Storage**: While TALL-masks + TA [1] exhibits a gradual increase in storage costs as more models are merged, our STA + Shuffle algorithm maintains a stable storage cost of less than twice the size of a single model, regardless of the number of models merged. This efficiency is achieved because we only need to store a single random seed for each model. Models can then be retrieved with negligible latency\\u2014292.70 ms for a single ViT-B/32 and 658.19 ms for a single ViT-L/14\\u2014measured over 10 repetitions on an Intel Xeon Gold 6448Y CPU.\\n3. 
**Comparison to Lightweight Baselines**: Although model-merging baselines like Task Arithmetic [2] use 55% of the storage of our method, their performance is inferior, resembling pre-trained performance levels as the number of tasks increases.\\n\\nWe have updated our paper to include these comparisons (see **Section 5.9 Scalability Analysis**), emphasizing our algorithm\\u2019s efficiency and effectiveness in merging larger and more numerous models.\"}
To examine the interaction between layer shuffling and task vector superposition more closely, we curated below the average accuracy of all three variants of our method across all our benchmarks:\\n\\n| | **8xViT-B/32** | **8xFlan-T5-base** | **8xFlan-T5-base-LoRA** | **8xViT-L/14** | **14xViT-L/14** | **20xViT-L/14** | **7xGPT-2** |\\n| ------------------------ | -------------- | ------------------ | ----------------------- | -------------- | --------------- | --------------- | ------------ |\\n| **Method** | Acc.\\u2191 | Acc.\\u2191 | Acc.\\u2191 | Acc.\\u2191 | Acc.\\u2191 | Acc.\\u2191 | Acc.\\u2191 |\\n| Pre-trained | 48.2 | 75.7 | 75.7 | 64.5 | 68.1 | 65.2 | 44.5 |\\n| Fine-tuned | 90.3 | 86.4 | 84.6 | 94.4 | 93.5 | 94.2 | 82.0 |\\n| **TA+Shuffle (Ours)** | 81.3 | 85.7 | **83.9** | 93.0 | 88.8 | 87.1\\u00b10.1 | ***76.7*** |\\n| **STA (Ours)** | **89.6** | ***86.5*** | 83.0 | **94.2** | **92.8** | **93.4** | 71.3\\u00b10.6 |\\n| **STA + Shuffle (Ours)** | ***89.9*** | **86.4** | ***84.4*** | ***94.3*** | ***93.0*** | ***93.5*** | **76.6\\u00b10.2** |\\n\\nThis reveals three key patterns in the interaction between these two mechanisms:\\n\\n1. **Complementary Enhancement**: On several benchmarks (8xViT-B/32, 8xFlan-T5-base-LoRA, and 14xViT-L/14), we observe a clear complementary effect. The combined approach (STA + Shuffle) achieves notably higher accuracy than either method alone, demonstrating that the two techniques can work synergistically.\\n2. **Method Dominance**: In other cases, we find that one method may be primarily responsible for the performance gains.\\n3. **The Combined Approach Always Works**: The combined approach (STA + Shuffle) demonstrates remarkable consistency, achieving either the best performance or performance statistically indistinguishable from the best across all seven benchmarks.\\n\\nThis interplay between the methods suggests a more nuanced relationship than simple additivity. 
Combining the two approaches does not necessarily produce an additive effect. When the gains are not additive, however, the methods remain complementary: each tends to be more effective on a different set of tasks, sometimes one dominating and sometimes the other, while their combination consistently performs well.\\n\\nWe have a working theory for this phenomenon. Our theoretical analysis (see equation 5 and section 4) shows that both techniques fundamentally work by enhancing orthogonality between task vectors, thereby reducing interference. The \"one-side-dominant\" benefits we observed in some cases may be due to diminishing returns in orthogonality enhancement: once sufficient orthogonality is achieved through either method, additional increases may yield minimal improvements. We plan to expand this analysis in a future version of our manuscript to provide a more detailed theoretical framework for understanding these interactions.\"}", "{\"summary\": \"This paper introduces two methods, layer shuffling and task vector superposition, aimed at reducing interference between task vectors in multi-model compression scenarios. The proposed methods work by increasing the orthogonality of task vectors, thus minimizing their interference during merging. By leveraging randomization, these methods require no additional training and can be applied across various models and tasks. Experiments on multiple benchmarks, including CLIP, Flan-T5, and GPT-2, demonstrate that this approach can achieve comparable performance to fine-tuned models while reducing storage costs, particularly for real-world deployment scenarios where adding or removing models on the fly is necessary.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Simplicity and Effectiveness: One of the major strengths of this paper lies in its approach\\u2019s simplicity. 
Layer shuffling and task vector superposition are straightforward yet powerful techniques that effectively reduce interference without needing additional training, optimization, or complex configurations. This simplicity not only enhances the practicality of the approach but also makes it easy to implement and adapt across various multi-model compression tasks, proving that even minimal adjustments can yield significant performance improvements.\", \"Effective Interference Reduction: The combination of layer shuffling and task vector superposition is innovative in addressing interference by increasing orthogonality among task vectors. This approach allows for a more effective merging process, yielding improved model accuracy without the need for additional optimization or training steps.\", \"Adaptability and Scalability: The proposed method\\u2019s flexibility is a clear strength. Its data and model-independent nature enables seamless additions and removals of models (hot-swapping) without re-computation, a valuable feature for dynamic applications. Moreover, the approach is efficient, doubling the memory footprint while providing significant accuracy improvements.\", \"Comprehensive Evaluation: The experiments cover a range of benchmarks and tasks, showcasing the model\\u2019s capability across various domains, from image classification to text generation. This breadth of evaluation helps establish the generalizability of the method across tasks and model architectures.\"], \"weaknesses\": [\"Lack of Detailed Performance Analysis Based on Shuffle/Superposition Levels: It would be useful to analyze the impact of different levels of shuffling and superposition, as these levels could influence task vector similarity and interference differently. 
This analysis would provide a clearer picture of optimal interference reduction strategies.\", \"Clarity Issues in Method Description: Some aspects of the method, such as the merged task vector formation in equation (8), could benefit from further clarification. Specifically, does shuffling task vectors in different layers cause mixing of task vectors across layers, for instance, between k-1 or k+1? Clarifying this would enhance understanding of how the shuffle affects layer-specific task vector alignment.\", \"Effectiveness Across Tasks: The effectiveness of either TA+Shuffle or STA appears to vary by task, yet the paper does not discuss why some tasks benefit more from specific strategies. A more in-depth analysis here would provide insights into optimizing methods based on task characteristics.\", \"Related Work Reference (PEFT): (Maybe, long shot) this paper is related? \\\"Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals,\\\"\", \"Minor Formatting Issues: There are some minor formatting errors in the document, such as incorrect Latex punctuation and inconsistent reference formatting. For example, equation 3 is mistakenly referenced in the context of equation 4, and parentheses are missing in certain citations. Additionally, clarifying what the values in parentheses mean in tables, such as in the average (%) and Bits (Gb) columns, would be helpful, as it currently requires reading the text to understand that they refer to relative performance to fine-tuned models.\"], \"questions\": \"Please see the above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**8. Minor Fixes**\\n\\n* L268-269: fix references for benchmarks: the vision one for instance comes from Ilharco et al. 
and not from FusionBench.\\n * We apologize for the mistake, and have fixed it in the revision.\\n* L202: ontain \\u2192 obtain\\n * We have fixed the typo. Thank you.\\n* Rephrase informal writing: L217: \\u201cwhich we expect to be much lower\\u201d\\n * We have refined it to \\u201cwhich is expected to be significantly lower due to the reduced alignment of parameter vectors from different layers.\\u201d Thank you.\\n* Rephrase informal writing: L150: \\u201cbut this balance has generally been tricky to achieve\\u201d\\n * We have modified it to \\u201cbut attaining this balance has often been challenging\\u201d. Thank you.\\n\\n\\n\\n[1] Wang, K., Dimitriadis, N., Ortiz-Jimenez, G., Fleuret, F., & Frossard, P. (2024). Localizing Task Information for Improved Model Merging and Compression. arXiv preprint arXiv:2405.07813.\\n\\n[2] Yadav, P., Tam, D., Choshen, L., Raffel, C. A., & Bansal, M. (2024). Ties-merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Yang, E., Wang, Z., Shen, L., Liu, S., Guo, G., Wang, X., & Tao, D. (2023). Adamerging: Adaptive model merging for multi-task learning. arXiv preprint arXiv:2310.02575.\\n\\n[4] Tang, A., Shen, L., Luo, Y., Hu, H., Du, B., & Tao, D. (2024). Fusionbench: A comprehensive benchmark of deep model fusion. arXiv preprint arXiv:2406.03280.\\n\\n[5] Tang, A., Shen, L., Luo, Y., Xie, S., Hu, H., Zhang, L., ... & Tao, D. (2024). Smile: Zero-shot sparse mixture of low-rank experts construction from pre-trained foundation models. arXiv preprint arXiv:2408.10174.\\n\\n[6] Tang, A., Shen, L., Luo, Y., Yin, N., Zhang, L., & Tao, D. (2024). Merging Multi-Task Models via Weight-Ensembling Mixture of Experts. arXiv preprint arXiv:2402.00433.\\n\\n[7] Matena, M. S., & Raffel, C. A. (2022). Merging models with fisher-weighted averaging. 
Advances in Neural Information Processing Systems, 35, 17703-17716.\\n\\n[8] Cheung, B., Terekhov, A., Chen, Y., Agrawal, P., & Olshausen, B. (2019). Superposition of many models into one. Advances in neural information processing systems, 32.\"}", "{\"comment\": \"**1. Investigating the Impact of Shuffling and Superposition Levels on Performance**\\n\\nWe appreciate this valuable suggestion. Following it, we conducted an extensive analysis of shuffling and superposition levels in the newly added **Appendix D.2**, which has been discussed in **[GR3]**. Our analysis revealed that performance remains robust at lower skip rates, suggesting opportunities for memory optimization. We discovered a critical cosine similarity threshold that helps explain the effectiveness of different interference reduction methods. We hope these findings provide a clearer picture of optimal interference reduction approaches. Thank you for helping us improve our work.\\n\\n**2. Analyzing Task-Specific Effectiveness of Proposed Strategies**\\n\\nThank you for this insightful comment about task-specific effectiveness. As shown in **[GR3]**, we observed interesting task-dependent variations; for example, TA+Shift underperformed TA+Shuffle on GTSRB but showed the reverse behavior on SUN397. While our current task set is too limited for a comprehensive statistical analysis of task characteristics, we have included initial findings about these variations in **Appendix D.2**. We acknowledge this is an important direction and plan to conduct a more thorough analysis with a larger set of tasks in future work. \\n\\n\\n**3. Clarifying Merged Task Vector Formation and Layer Shuffling Effects**\\n\\nWe appreciate you bringing this to our attention. Yes indeed, layer shuffling can mix layer k-1 from model A with layer k+1 from model B, given that they have the same shape. We will enrich these text descriptions and incorporate them into a future version of our paper. 
Meanwhile, to help readers understand layer shuffling more easily, we have added a visual illustration in Figure 3 (a) of our revision file.\\n\\n**4. Incorporating Relevant PEFT Literature into Related Work**\\n\\nWe appreciate the reviewer's suggestion regarding this relevant work. We have added a brief discussion of their paper in the updated Related Work section. Our approach shares similarities in that both methods aim to reduce the memory footprint of delta parameters from fine-tuned models. The fundamental difference is that their method focuses on compressing each individual model's residual separately, whereas we leverage the parameter properties of a set of aligned fine-tuned models to compress them collectively, thereby reducing cross-model redundancy.\\n\\nWe would like to compare our method to theirs. However, we are currently unable to reproduce their results due to the inaccessibility of their codebase, as the provided link is broken. We have contacted the authors to request access to their code and will include a comparison with their methods in a future version of our paper if we are able to obtain it.\\n\\n\\n**5. Minor Formatting Issues**\\n\\nThank you for your helpful suggestions! We have addressed all the formatting issues, including correcting the LaTeX equation punctuation and ensuring consistent reference formatting throughout the document. Additionally, we have clarified the values in parentheses within the tables, such as the \\\"average (%)\\\" and \\\"Bits (Gb)\\\" columns, by providing detailed explanations in the table captions, ensuring that the meaning is clear without needing to refer back to the text.\"}", "{\"summary\": \"This paper introduces Layer Shuffling and Task Vector Superposition, two random mechanisms to reduce interference in multi-model compression by increasing orthogonality between task vectors. 
The methods achieve near-identical accuracy to individual fine-tuned models while reducing storage costs by 4 times and enabling seamless hot-swapping of models without recomputation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a novel approach to multi-model compression through random mechanisms\", \"The empirical evaluation is conducted across multiple benchmarks to demonstrate the effectiveness of the method\"], \"weaknesses\": [\"The current presentation and writing require significant improvements. For instance, the mathematical analysis is overly simplistic and does not warrant extensive explanation. Additionally, the proposed method lacks a rigorous proof demonstrating why Layer Shuffling specifically enhances orthogonality more effectively than other potential random transformations.\", \"The interaction between Layer Shuffling and Task Vector Superposition isn't thoroughly analyzed, as it's unclear whether they're truly complementary or if one method dominates the benefits\", \"The experiments are not convincing because the models used for comparison are generally much smaller, leading to expected inferior performance from competitors. Meanwhile, the authors' model is significantly larger, resulting in better performance, which does not necessarily demonstrate an advantage.\"], \"questions\": \"see the above weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
4wtcXV0kbi
S7: Selective and Simplified State Space Layers for Sequence Modeling
[ "Taylan Soydan", "Nikola Zubic", "Nico Messikommer", "Siddhartha Mishra", "Davide Scaramuzza" ]
A central challenge in sequence modeling is efficiently handling tasks with extended contexts. While recent state-space models (SSMs) have made significant progress in this area, they often lack input-dependent filtering or require substantial increases in model complexity to handle input variability. We address this gap by introducing S7, a simplified yet powerful SSM that can handle input dependence while incorporating stable reparameterization and specific design choices to dynamically adjust state transitions based on input content, maintaining efficiency and performance. We prove that this reparameterization ensures stability in long-sequence modeling by keeping state transitions well-behaved over time. Additionally, it controls the gradient norm, enabling efficient training and preventing issues like exploding or vanishing gradients. S7 significantly outperforms baselines across various sequence modeling tasks, including neuromorphic event-based datasets, Long Range Arena benchmarks, and various physical and biological time series. Overall, S7 offers a more straightforward approach to sequence modeling without relying on complex, domain-specific inductive biases, achieving significant improvements across key benchmarks.
[ "state space models", "neural network architectures", "deep learning architectures", "sequence modeling", "event-based vision", "event cameras", "neural odes" ]
https://openreview.net/pdf?id=4wtcXV0kbi
https://openreview.net/forum?id=4wtcXV0kbi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "d9eoP37RTA", "QzawM73X5C", "NlbWaK0OxK", "Cd0ZvOXb9J", "6LPVlqKEcx" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730482202956, 1730699740959, 1730695552875, 1730472673995, 1731588606532 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2167/Reviewer_wghn" ], [ "ICLR.cc/2025/Conference/Submission2167/Reviewer_QET1" ], [ "ICLR.cc/2025/Conference/Submission2167/Reviewer_jZ6a" ], [ "ICLR.cc/2025/Conference/Submission2167/Reviewer_s9V1" ], [ "ICLR.cc/2025/Conference/Submission2167/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents S7, a simplified state-space model (SSM) designed for long-range sequence tasks. Building on the S5 model, it introduces input-dependence to allow dynamic adjustments to state transitions based on input content. The paper claims S7 achieves stable reparameterization and efficient performance across diverse tasks, including neuromorphic event-based datasets and Long Range Arena (LRA) benchmarks, without requiring the complexity of models like S4 or S6 (Mamba). The proposed model is argued to balance computational simplicity with adaptive filtering and content-based reasoning.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"*Input-Dependent Dynamics:* S7\\u2019s adaptation of the S5 model to be input-dependent is a promising approach. This could enhance the model\\u2019s responsiveness to input variability, a significant issue in long-range sequence tasks.\\n\\n*Stable Reparameterization:* The model claims to maintain gradient stability over long sequences, addressing gradient explosion/vanishing issues commonly faced in deep learning. 
This feature has potential benefits for training efficiency and stability.\\n\\n*Broad Applicability:* S7\\u2019s successful application across various domains, from physical time series to neuromorphic vision, suggests it may generalize well to different task types.\", \"weaknesses\": \"**Limited Novelty:** The paper is introducing an input-dependent update mechanism (already introduced by S6) stabilized through the reparameterization framework and key equations (Eqs. 6, 7, and 8) borrowed directly from StableSSM [1], raising concerns about the originality of the contributions. The paper only states that it was \\u201cinspired\\u201d by stable reparameterization, yet much of the core methodology relies on prior work.\\n\\n\\n**Inconsistent Notation:** The notation for $\\ud835\\udc34_k$ is unclear, with dependency on input appearing inconsistently (e.g., it appears in Eq. 5 and line 267 -- e.g. $\\ud835\\udc34_k(u_k, \\\\theta_m)$ but is omitted elsewhere -- e.g. $\\ud835\\udc34_k(\\\\theta_m)$. This lack of uniformity in notation undermines the model\\u2019s theoretical presentation.\\n\\n**Weak Justification for S5 Model Selection:** S5 is mentioned as the basis for S7, but no rationale is provided for not using S6 (Mamba) and the reparameterization technique from StableSSM. Moreover, no connection or description of the S5 model is given (MIMO approach etc.)\\n\\n**Assumptions Clarity:** Assumptions (3.1, 3.2, and 3.3) are not well justified or examined for feasibility, and the text lacks guidance on implementing or verifying these assumptions. This leaves important theoretical aspects of the model unaddressed.\\n\\n**Unclear Contribution of Neuromorphic Design:** The neuromorphic-specific design choices in Section 3.4 seem disconnected from the rest of the model\\u2019s development (no other mention on the first part of the paper). It\\u2019s unclear whether these additions (Eq. 
11, 12) apply exclusively to neuromorphic tasks or extend to other benchmark tasks.\\n\\n**Lack of Benchmark Justification:** The paper does not clarify why specific datasets were chosen. For instance, given the input-dependent nature of S7, benchmarks used by similar models like Mamba (e.g., Selective Copy or Induction Heads or other similar benchmarks -- see Section 7/Table 4 of the thematic survey [2]) might have been more appropriate for comparison.\\n\\n**Poor Performance on LRA Benchmarks:** S7\\u2019s subpar performance on LRA benchmarks raises concerns about its applicability to heterogeneous data. The authors provide only a brief discussion, without substantial insight or proposed solutions for improving performance on these challenging tasks.\\n\\n\\n[1] Wang, Shida, and Qianxiao Li. \\\"Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization.\\\" arXiv preprint arXiv:2311.14495 (2023).\\n\\n[2] Tiezzi, Matteo, et al. \\\"State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era.\\\" arXiv preprint arXiv:2406.09062 (2024).\", \"questions\": \"1. **Could the authors clarify the novelty of the reparameterization?** How does it differ from StableSSM\\u2019s reparameterization framework?\\n\\n2. **Why was the S5 model chosen as the basis for S7?** Given that S6 with the StableSSM reparameterization might provide similar benefits, what informed this design choice?\\n\\n3. **Could the authors specify if the neuromorphic-specific design applies solely to neuromorphic tasks or to all benchmarks?** This would improve clarity regarding the model's consistency across different tasks.\\n\\n4. 
**What is the author\u2019s perspective on improving the model's performance on data where time-dependence is not relevant?** Given S7\u2019s limited success on LRA, is there a feasible modification that could address these challenges while preserving input-dependence?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new SSM called S7 for sequence modeling. It combines the strengths of S5 (simpler structure) and S6 (input-dependent state transitions) and incorporates stable reparameterization of the state matrix. Many experiments were carried out to verify the efficiency and performance on different datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. It turns out that the combination of S5 and S6 is helpful for SSM structure and contributes to the development of SSMs.\\n2. Experiments are conducted on many datasets together with extensive analysis.\", \"weaknesses\": \"1. The core contribution of this paper seems to be successfully combining S5 and S6, with many experiments being carried out. However, the paper lacks a detailed introduction to the S5 and S6, so readers may not be completely clear about the advantages and disadvantages of these two models, and how S7 surpasses the two. E.g., why say \\\"S6 introduces hardware-specific complexity\\\" (Line 88)? Some details of S5 and S6 can be shown clearly in 3.1 Background.\\n2. An intuitive comparison of the S4 (S4D or DSS), S5, S6, and S7 schematic can be given, to clearly show the development and difference of SSMs. Or a similar part like 4. RELATIONSHIP BETWEEN S4 AND S5 in S5 paper [1].\\n3. The effectiveness of Stable Reparameterization of S7 seems not to be verified. 
More introduction to the initialization of the state matrix should be given, or the writing logic of Stable Reparameterization for Long-Term Dependencies (Line 211) is too hard for readers to follow.\\n\\n[1] Smith, J. T., Warrington, A., & Linderman, S. W. (2022). Simplified State Space Layers for Sequence Modeling. ArXiv. https://arxiv.org/abs/2208.04933\", \"questions\": \"1. The S7 should combine the advantages of the S5 and S6, so it needs to be compared with both. E.g., why is it simpler than the S6? Training faster? Fewer parameters?\\n2. Why did the introduction of the selective matrix degrade the effectiveness so much on long sequence tasks like Path-X in LRA, compared to S5 (Line 402)? More analysis is needed; otherwise, the S7's improved sequential modeling capabilities using an input-dependent matrix don't seem useful and convincing.\\n3. Prior work showed that the performance of deep state space models is sensitive to the initialization of the state matrix. [1] Have you done experiments with different initializations to verify the robustness of the S7 module, along with experiments on the effectiveness of Stable Reparameterization for Long-Term Dependencies?\\n\\n[1] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher R\u00e9. Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in Neural Information Processing Systems, 34, 2021b.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an SSM (state space model) architecture called S7, to provide an input-dependent mechanism for an existing work (S5), and shows this architecture can efficiently and accurately deal with sequence modeling tasks. 
The experiments show it performs better than Mamba (S6) in LRA tasks, while worse than other SSM-like models, and show it achieves good performance on neuromorphic datasets and dynamic prediction tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper proposes an input-dependent mechanism for SSM, with stable reparameterization techniques, and it proves the stability of this reparameterization through comprehensive theoretical derivations.\", \"weaknesses\": \"1.\\tThe novelty is not very clear to me. As mentioned in the paper, it is claimed that this paper provides efficient input-dependent state transition dynamics based on S5, but conceptually, it is the same as what Mamba (S6) did for S4, which introduced learnable matrices A, B, C.\\n2.\\tThe paper claims this S7 is more efficient than Mamba, but I did not find any experimental data on the efficiency comparison with Mamba/Mamba2. Theoretically, without using parallel technologies like selective scan or others, how could one run the S7 in an efficient way when at each time step one needs to update the dynamics of A, B, C, D? \\n3.\\tThe experiments do not well support the claims: the performance of S7 in LRA is substantially worse than many other methods, and there are no results showing it\u2019s better than Mamba in general language modeling tasks (which is an important selling point of Mamba-like models). This leads to a question: what is the use case of S7?\", \"questions\": \"1.\\tWhat does S7 mean? (Does the \u201c7\u201d have some particular meaning?)\\nFor other questions, please see the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel state-space model (SSM) called S7 that introduces input-dependent dynamics to filter and retain relevant input information over extended time horizons. 
An important part of S7 is a reparametrization trick that ensures training stability. S7 is claimed to reach a balance between performance and model's complexity for processing very long sequences.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The authors evaluate their model on a series of benchmark tasks\", \"The authors provide some theoretical analysis for the training stability\"], \"weaknesses\": \"- Scientific novelty of the manuscript: the authors base their work on the existing S5 model and extend it with the input-gating, which other models, such as Mamba, have already demonstrated. In fact, it remains even unclear when reading through the main paper how the input-gating is realized precisely. The reader misses those equations. I.e., how is Lambda_k computed?\\n- The selection of the tasks and in particular the selection of the reference models is a major weakness of the model:\\n 1. since the S7 model is based on the S5 model, it is of paramount importance that one always compares to S5 at least, which the authors do not do for many datasets they considered. For example, this comparison is missing in Table 5, 6 and 7 (wrongly called Figure 2 in the manuscript).\\n 2. looking at the LRA results, one can see that S7 is only in 2 tasks slightly better than S5, but in the remaining tasks it is significantly worse. Moreover, in Table 5, 6 and 7 (wrongly called Figure 2 in the manuscript), the authors don\u2019t even compare to the S5.\\n- The authors claim to introduce a simpler model than Mamba, but it remains unclear in what regards it is simpler, e.g., if it uses fewer parameters or fewer computations; this needs to be demonstrated in the results.\\n- Some minor comments are: The authors are very much overstating their novel contributions with terms such as \u201cS7 has demonstrated its superiority\u201d, which is by no means true. 
Figure 2 on page 10 should probably be Table 7\", \"questions\": [\"Is the code to reproduce the results publicly available?\", \"How is lambda_k computed?\", \"Fig.1 delta_k is not defined in the caption\", \"Line 104-109 this sentence is too long and could be split in 2-3 smaller ones.\", \"What is the performance of other SSMs in Table 5, 6 and 7?\", \"In which regards in the S7 simpler or superior over other SSMs? Number of flops? Number of parameters?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We decided to withdraw the paper and go for the resubmission, where we will also include language modeling experiments.\\n\\nWe want to thank the reviewers for the feedback, which will be considered in the future.\"}" ] }
4wpqmhh05N
The Mutual Information Matrix in Hyperbolic Embedding and a Generalization Error Bound
[ "Yu Geng", "Bing Cheng" ]
Representation learning is a crucial task of deep learning, which aims to project texts and other symbolic inputs into mathematical embedding. Traditional representation learning encodes symbolic data into an Euclidean space. However, the high dimensionality of the Euclidean space used for embedding words presents considerable computational and storage challenges. Hyperbolic space has emerged as a promising alternative for word embedding, which demonstrates strong representation and generalization capacities, particularly for latent hierarchies of language data. In this paper, we analyze the Skip-Gram Negative-sampling representation learning method in hyperbolic spaces, and explore the potential relationship between the mutual information and hyperbolic embedding. Furthermore, we establish generalization error bounds for hyperbolic embedding. These bounds demonstrate the dimensional parsimony of hyperbolic space and its relationship between the generalization error and the sample size. Finally, we conduct two experiments on the Wordnet dataset and the THUNews dataset, whose results further validate our theoretical properties.
[ "Hyperbolic embedding", "Mutual information", "Generalization error bounds" ]
Reject
https://openreview.net/pdf?id=4wpqmhh05N
https://openreview.net/forum?id=4wpqmhh05N
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zbTPhUfWIq", "wLSmoMN0Vg", "ouAknaG92Q", "mAy8n2fqUA", "U4jR676WxS", "NvsVwse8qh", "KiPa7sTT9d", "I0ScuoC7qe", "BuOMgolMjM", "2i44l4yPkY", "2SpIcM3rwo" ], "note_type": [ "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1737523795818, 1730727381913, 1733056475812, 1730471915730, 1733056005946, 1733056310323, 1734667204341, 1733056717343, 1733218973505, 1730632791477, 1730675160949 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6837/Reviewer_pY2V" ], [ "ICLR.cc/2025/Conference/Submission6837/Authors" ], [ "ICLR.cc/2025/Conference/Submission6837/Reviewer_Yd2w" ], [ "ICLR.cc/2025/Conference/Submission6837/Authors" ], [ "ICLR.cc/2025/Conference/Submission6837/Authors" ], [ "ICLR.cc/2025/Conference/Submission6837/Area_Chair_RDpK" ], [ "ICLR.cc/2025/Conference/Submission6837/Authors" ], [ "ICLR.cc/2025/Conference/Submission6837/Reviewer_Wxoe" ], [ "ICLR.cc/2025/Conference/Submission6837/Reviewer_Wxoe" ], [ "ICLR.cc/2025/Conference/Submission6837/Reviewer_kgTB" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper provides both theoretical and empirical analysis of Skip-Gram Negative-Sampling (SGNS) embeddings in hyperbolic space. While SGNS traditionally embeds words and contexts in Euclidean space (Word2Vec), the authors extend this approach to hyperbolic space using Poincar\\u00e9 embeddings. Two types of errors are used to evaluate the embeddings: spatial error, which is influenced by the dimensions and structure of hyperbolic space, and generalization error, which measures the relationship between embedding error and sample size across different spaces. 
An empirical study of hyperbolic embeddings is conducted on WordNet and THUNews.\\n\\nThe authors investigate how hyperbolic distance relates to mutual information, deriving bounds on both spatial and generalization errors. Furthermore, they demonstrate that the distance, d(w,c), between w and c corresponds to the mutual information between w and c in a hyperbolic space. This finding helps to motivate the use of Poincar\\u00e9 embeddings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides novel insights by studying the mutual information matrix in Skip-Gram Negative-sampling (SGNS) embeddings in hyperbolic space. In particular, demonstrating that distance in hyperbolic embeddings obtained by using SGNS equates to mutual information is an interesting finding that can motivate the use and further study of Poincar\\u00e9 embeddings in NLP. Additionally, the empirical result that hyperbolic embeddings are more unstable during training than their Euclidean counterpart and that more samples are needed to reduce training error can help guide further works in training hyperbolic embeddings.\", \"weaknesses\": \"Overall, the paper is extremely dense and difficult to follow because it provides little motivation or intuition for mathematical notation.\\n\\nI understand that one of the paper's main contributions is to provide a detailed mathematical analysis of the mutual information matrix in hyperbolic embeddings. Still, some detail is unnecessary in the main body of the paper and hinders the reader's ability to read the paper. For example, results such as those in section 3.1 that use straightforward algebraic computations to show that distance approximates mutual information should be moved to the appendix. \\n\\nWhile the paper provides some nice theoretical insights, the methods used for the evaluation of hyperbolic embeddings with Skip-Gram Negative-Sampling are not robust. 
Using the rank of the restored point-wise mutual information matrix as the sole metric to compare Euclidean and hyperbolic embeddings is not particularly interesting. Investigating the performance of hyperbolic embeddings on word similarity tasks, e.g., WordSim-353 or SimLex999, would provide a meaningful quantitative comparison of embeddings based in different spaces and help motivate the study of static, hyperbolic word embeddings. Further, comparing the performance of classification models that use standard Word2Vec embeddings and hyperbolic Skip-Gram Negative-sampling embeddings would provide a much stronger motivation for the paper.\", \"questions\": \"1. 263-264: I don\u2019t understand the reasons for setting $V_{w} = V_{c} = V$. Can you elaborate more on why this setting is used? If it is common practice, there should be some citations.\\n\\n2. It would be helpful to provide a clear definition of parsimony in Section 3.2\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to express our sincere gratitude for the opportunity to revise our manuscript entitled \u201cThe Mutual Information Matrix in Hyperbolic Embedding and a Generalization Error Bound\u201d and for the insightful comments. In response to the questions and concerns you have raised, we provide some answers here. In a new version we have submitted, we have updated some parts of the article's description:\\n1. In the second part of our experiments in Section 4.2, to verify the relationship between the sample size and generalization error as stated in Theorem 2, we compared a 400-dimensional Euclidean space with a 2-dimensional Poincar\u00e9 space. The reason for this choice is that when the amount of data approaches a very large scale, the errors of both methods are relatively close. 
At this point, the errors predicted by Theorem 2 are very small, implying that the errors introduced by the space, as discussed in Theorem 1, are close. This allows for a better comparison of the impact of data quantity on the generalization error across different spaces. In our new revised version, we have updated and explained the reasons for this choice.\\n2. In our revised version, we have enhanced the description of our experiments and theorem analysis to aid in understanding the results. Our experiments are mainly divided into four parts. First, we demonstrate that hyperbolic embedding significantly outperforms Euclidean space in recovering the dimensionality of the PMI matrix, reflecting the compressive capability of hyperbolic embedding, known as the parsimony property. Second, we present the dimensionality of the Gramian matrix of hyperbolic embeddings, as shown in Theorem 1. We then analyze the impact of different sample sizes on the embedding training error, verifying the results of Theorem 2, which states that low-dimensional hyperbolic spaces require more samples for training to achieve the same loss as Euclidean spaces. Finally, we discuss the reasons why the computational time for hyperbolic embeddings is longer than for Euclidean spaces, due to the need for the RSGD method and geodesic distance calculations, both of which greatly increase the computational complexity of hyperbolic embedding methods.\\n3. In our new revised version, we have added an explanation for fully trained models, as observed in the second experiment of Section 4.2, where full training indicates the sample size at which the loss converges, meaning the loss no longer changes significantly with an increase in the sample size.\\n4. 
As described in point 2, we have revised and changed the presentation of the experimental results in tables and curve charts.\"}", "{\"summary\": \"This paper discusses the relationship between the point-wise mutual information matrix and the hyperbolic distance. Furthermore, the authors establish generalization error bounds for hyperbolic embedding. These bounds demonstrate the dimensional parsimony of hyperbolic space and its relationship between the generalization error and the sample size. Experiments on the Wordnet dataset and the THUNews dataset validate the theoretical properties.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1) Connecting hyperbolic embedding with mutual information is interesting.\", \"weaknesses\": \"1) The motivation of connecting hyperbolic embeddings and PMI is unclear. Both are distance measures: hyperbolic distance captures similarity of hierarchies, while PMI quantifies the discrepancy between probabilities. What do you mean by the \\u201cequivalence between the Gramian matrix in hyperbolic embedding and the dimension of the space\\u201d?\\n2) What are the research questions that you want to answer in the Experiment section? The authors said that the theoretical findings are evaluated by conducting the experiments. However, it is unclear how the experimental results relate to the theoretical findings. Which theorems (Theorem 1 or 2?) do you want to answer? It would be much clearer if the authors listed the research questions. What do you really want to evaluate and compare?\\n3) I could not understand what the tables in the experiment section want to tell us. Perhaps the authors want to show some correlation between dimension and mutual information matrix? 
Then it is better to plot it with some line plots.\", \"questions\": \"See my questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to express our sincere gratitude for the opportunity to revise our manuscript entitled \\u201cThe Mutual Information Matrix in Hyperbolic Embedding and a Generalization Error Bound\\u201d and for the insightful comments. In response to the questions and concerns you have raised, we provide some answers here. In a new version we have submitted, we have updated some parts of the article's description:\\n1. We have removed the algebraic computation from Section 3.1 of the main text and moved it to the appendix, while also adding more motivation and intuitive understanding for the substitution of the PMI matrix. Furthermore, we have added motivation and intuitive understanding for Section 3.2 as well. \\n2. The reason we choose the order of the PMI matrix as an indicator is that in the PMI matrix, the mutual information between different words and the context forms vectors that serve as an embedding for the words. For words with similar mutual information vectors, it indicates that they contribute similar information in different contexts, thus showing a high degree of substitutability. However, these vectors are usually of high dimension, so we need to reproduce them through a lower-dimensional representation. As for PMI matrices of higher rank, they indicate a greater degree of differentiation between words, which also preserves more information between words.\\n3. Regarding the first question you raised, we have added the reasons for choosing $ V = V_w = V_c $. 
Since the size of $ V_c $ grows exponentially with the size of the context window, and the dimension of the $ V_w \\times V_c $ matrix is at most $ V_w $, we have chosen the context window to be 1, meaning the context vocabulary is the same as the word vocabulary. At this time, when we use the SGNS training method, we are using a word pair-shaped dataset. Moreover, the distance matrix at this time is a square matrix, which is also easier for the following theoretical analysis. When the context window decreases, it may become difficult to distinguish the mutual information vectors corresponding to different words, making it hard to differentiate between word vectors, thus reducing the rank of the PMI matrix.\\n4. For the second question you raised, we have clearly defined the concept of parsimony at the beginning of Section 3.2: Compared to Euclidean space, hyperbolic space can compress the dimensions of the embeddings into a lower-dimensional space.\\n5. Further, we added correlation tests for the WordSim353 dataset. Due to the limitations of the machine we used for embedding training, the Pearson correlation coefficients we obtained on the WordSim353 dataset are relatively low compared to the state of the art. However, considering that the main goal of our study is to compare and understand the impact of the embedding space on the embeddings, the results we obtained still confirm our theoretical findings. 
The results we obtained are shown in the table below:\\n| space | dim | corre | loss |\\n|------------|-----|---------|---------|\\n| euclidean | 50 | 0.9997 | 1.6177 |\\n| euclidean | 100 | 0.1132 | 1.5586 |\\n| euclidean | 150 | 0.1187 | 1.5164 |\\n| euclidean | 200 | 0.1228 | 1.4988 |\\n| poincare | 2 | 0.1189 | 0.9214 |\\n| poincare | 4 | 0.1206 | 0.7809 |\\n| poincare | 6 | 0.1302 | 0.7413 |\\n| poincare | 8 | 0.1040 | 0.7915 |\\nFrom the experimental results, we can observe that the loss we obtained is inversely correlated with the Pearson correlation coefficient. When we increase the dimensions of Euclidean and Poincare spaces, the training loss decreases, accompanied by an increase in the Pearson correlation coefficient. For the Poincare space, compared to the Euclidean space, a high Pearson correlation coefficient can be achieved with a relatively low number of dimensions. However, when the dimension is equal to 8, due to the insufficiency of training samples, the training of the Poincare embedding is not adequate, resulting in a lower Pearson correlation.\"}", "{\"comment\": \"We would like to express our sincere gratitude for the opportunity to revise our manuscript entitled \\u201cThe Mutual Information Matrix in Hyperbolic Embedding and a Generalization Error Bound\\u201d and for the insightful comments. In response to the questions and concerns you have raised, we provide some answers here. In a new version we have submitted, we have updated some parts of the article's description:\\n1. In our theoretical analysis, we chose a context window length of 1, hence $ V_c $ is the same as $ V_w $, and the samples used are in the form of word pairs. The WordNet dataset, being in the form of word pairs and featuring hierarchical relationships among nouns, is a common dataset in the study of Poincar\\u00e9 embedding methods. 
Therefore, our work also conducts comparative experiments on WordNet, and to avoid the influence of a single dataset, we have extended our tests to the newer and larger THUNews dataset. Additionally, due to the limitations of our experimental platform, we have not conducted training on larger datasets such as the Google News corpus.\\n\\n2. The bounds we propose in this paper include bounds on the generalization error. Furthermore, to minimize generalization error during training, we employ regularization methods to prevent overfitting, which can partly be attributed to an insufficient number of training samples. For general word embedding, the complexity of the word distribution in natural language often requires an immense amount of data to fit. To avoid overfitting when fitting the model on insufficient datasets, we do not want the training error to be too small. In this paper, we find that the mutual information vector between a word and its context can effectively represent the information of the word. However, since the number of contexts grows exponentially with the context window, we aim to recover the word-context mutual information matrix through a lower-dimensional embedding that captures the information shared between words. Therefore, we focus on the embedding capability of these two methods for a specific PMI matrix on a given word dataset. We are more concerned with whether the resulting word embedding faithfully reflects the co-occurrence probability distribution between a word and its context. Moreover, through our theory, we also understand that compared to Euclidean space, hyperbolic embedding methods require a larger sample size for training and are thus more prone to overfitting on the same training set.\\n\\n3. In Section 4.2, we discuss the complexity of the experimental methods. 
The dimensionality reduction capability of hyperbolic embedding requires more samples to achieve training errors close to those of Euclidean space. At the same time, due to the more complex calculations involved in using the RSGD method and hyperbolic geodesic distances, the algorithmic complexity of hyperbolic embedding methods is also higher than that of their Euclidean counterparts.\\n\\n4. Further, we added correlation tests for the WordSim353 dataset. Due to the limitations of the machine we used for embedding training, the Pearson correlation coefficients we obtained on the WordSim353 dataset are relatively low compared to the state of the art. However, considering that the main goal of our study is to compare and understand the impact of the embedding space on the embeddings, the results we obtained still confirm our theoretical findings. The results we obtained are shown in the table below:\\n| space | dim | corre | loss |\\n|------------|-----|---------|---------|\\n| euclidean | 50 | 0.9997 | 1.6177 |\\n| euclidean | 100 | 0.1132 | 1.5586 |\\n| euclidean | 150 | 0.1187 | 1.5164 |\\n| euclidean | 200 | 0.1228 | 1.4988 |\\n| poincare | 2 | 0.1189 | 0.9214 |\\n| poincare | 4 | 0.1206 | 0.7809 |\\n| poincare | 6 | 0.1302 | 0.7413 |\\n| poincare | 8 | 0.1040 | 0.7915 |\\nFrom the experimental results, we can observe that the loss we obtained is inversely correlated with the Pearson correlation coefficient. When we increase the dimensions of Euclidean and Poincare spaces, the training loss decreases, accompanied by an increase in the Pearson correlation coefficient. For the Poincare space, compared to the Euclidean space, a high Pearson correlation coefficient can be achieved with a relatively low number of dimensions. 
However, when the dimension is equal to 8, due to the insufficiency of training samples, the training of the Poincare embedding is not adequate, resulting in a lower Pearson correlation.\"}", "{\"metareview\": \"This paper is concerned with performing Word2Vec-style embeddings in hyperbolic space. The chosen model of hyperbolic space is the Poincare disk. The authors derive some theoretical results that generalize the Euclidean setting and perform some experiments for the hyperbolic version of the algorithm.\\n\\nOne strength is that it is always interesting to know how non-Euclidean variants of Euclidean methods work. This paper joins a line of work performing this task.\\n\\nThere are also a few big weaknesses for this paper. First, it is written in a style that makes it difficult to grasp, even for those who are fairly familiar with these topics. Second, the contribution here appears limited. Hyperbolic word embeddings are not a new topic (see for example Tifrea et al \\u201918). The authors can argue that for the particular combination of Word2Vec-type approach with hyperbolic space there is novelty, but it\\u2019s not clear what the gain is; the experimental findings are similar.\\n\\nAnother weakness is that the experiments are very limited. There are only a couple of datasets. These weaknesses were consistently agreed upon by the reviewers. As a result, this paper is not quite ready for acceptance, but could be the start of solid work down the line.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers focused on two aspects: the structure of the presentation and the clarity of the writing, along with the limited experimental results. 
The authors addressed some of the clarity concerns, but the paper could definitely use another round of improvements.\"}", "{\"comment\": \"We would like to express our sincere gratitude for the opportunity to revise our manuscript entitled \\u201cThe Mutual Information Matrix in Hyperbolic Embedding and a Generalization Error Bound\\u201d and for the insightful comments. In response to the questions and concerns you have raised, we provide some answers here. In a new version we have submitted, we have updated some parts of the article's description:\\n1. Intuitively, a word $ w $ can be represented by a vector composed of the mutual information with all contexts $ c $ in $ V_c $, with a length of $ |V_c| $. This vector can effectively represent the information contained in a word. When two words have similar mutual information vectors, this indicates a high degree of interchangeability across contexts, which also suggests similarity in meaning. The dimension of the PMI matrix reflects the richness of information represented by different words through mutual information. However, since the number of contexts in $ V_c $ increases exponentially with the size of the chosen context window, if a lower-dimensional embedding of $ w $ can recover the elements of the mutual information matrix $PMI( w, c )$ through some function $ d(w, c) $, it would compress the embedding of $ w $ from $ |V_c| $ dimensions to a lower dimension. By analyzing the optimal solution of the objective function of the SGNS method, we discovered the relationship between the distance $ d(w, c) $ in hyperbolic embeddings and the mutual information between words $ w $ and contexts $ c $, as discussed in Section 3.1. Furthermore, by decomposing the difference between the encoded $ d(w, c) $ matrix and the PMI matrix into errors introduced by the hyperbolic space and estimation errors caused by the SGNS method, we have defined the error bounds for hyperbolic embedding. 
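As an illustration of the two quantities this argument relies on, namely the PMI matrix built from word-context co-occurrence counts and the geodesic distance $ d(w, c) $ in the Poincar\u00e9 ball, a minimal NumPy sketch could look as follows (the helper names `pmi_matrix` and `poincare_distance` are illustrative, not taken from the paper's code):

```python
import numpy as np

def pmi_matrix(cooc, eps=1e-12):
    """PMI(w, c) = log( p(w, c) / (p(w) * p(c)) ) from a |V_w| x |V_c|
    co-occurrence count matrix; eps guards against log(0)."""
    p_wc = cooc / cooc.sum()
    p_w = p_wc.sum(axis=1, keepdims=True)   # marginal over contexts
    p_c = p_wc.sum(axis=0, keepdims=True)   # marginal over words
    return np.log((p_wc + eps) / (p_w * p_c + eps))

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points strictly inside the unit ball:
    d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))
```

The idea sketched in the rebuttal is that a good low-dimensional embedding makes some function of `poincare_distance(u_w, v_c)` approximate the corresponding entry of `pmi_matrix`; the arccosh and the norm terms in the denominator also illustrate why each RSGD step is costlier than a Euclidean dot product.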
The equivalence you mentioned refers to the first theorem we derived, which gives the relationship between the dimension of the Gramian matrix of Poincar\u00e9 embeddings and the dimension of the Poincar\u00e9 space, and the impact of the chosen Poincar\u00e9 space dimension on the estimation error.\\n\\n2. In our revised version, we have enhanced the description of our experiments and theoretical analysis to aid in understanding the results. Our experiments are mainly divided into four parts. First, we demonstrate that hyperbolic embedding significantly outperforms Euclidean space in recovering the dimensionality of the PMI matrix, reflecting the compressive capability of hyperbolic embedding, known as the parsimony property. Second, we present the dimensionality of the Gramian matrix of hyperbolic embeddings, as shown in Theorem 1. We then analyze the impact of different sample sizes on the embedding training error, verifying the results of Theorem 2, which states that low-dimensional hyperbolic spaces require more samples for training to achieve the same loss as Euclidean spaces. Finally, we discuss the reasons why the computational time for hyperbolic embeddings is longer than for Euclidean spaces, due to the need for the RSGD method and geodesic distance calculations, both of which greatly increase the computational complexity of hyperbolic embedding methods.\\n\\n3. Taking your suggestions into account, in our revised version, we have added more detailed notes to our tables to explain their content and replaced some tables with line charts to more intuitively present our experimental results.\"}", "{\"title\": \"Keep my score\", \"comment\": \"Thank you for your reply. I would strongly advise you to highlight the changes in the manuscript next time. 
Otherwise it's extremely hard to track revisions during the short rebuttal period.\\n\\nAfter reading the author responses and other reviews, I decided to keep my score unchanged.\\n\\nHowever, I want to stress that I do see the value in the paper, and encourage authors to give it some more work.\"}", "{\"summary\": \"Hyperbolic embeddings were introduced in the literature as an alternative to the embeddings in Euclidean space. This paper provides an analysis of the skip-gram embedding model in hyperbolic space. The authors offer their take on many dimensions of the hyperbolic embeddings, including their connection to the mutual information matrix, generalization capabilities (with theoretical proof), and required sample size/training stability. Theoretical results are further supported by empirical results on two datasets: WordNet and THUNews.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"I strongly believe exploring the embedding spaces beyond Euclidean space is crucial for the field\", \"Theoretical and empirical results are provided\", \"Reflection on the advantages (low-dimensionality) and disadvantages (training instability, large sample size, etc.)\"], \"weaknesses\": [\"Although it's crucial and interesting to explore various properties of hyperbolic embeddings, they do not exist in a vacuum, so it would be useful to see the performance of the embeddings on downstream tasks\", \"Provided experimental setup and results are hard to follow (see questions)\"], \"questions\": [\"__Questions__:\", \"Why choose 400 Euclidean dimensionality and 2 for Poincare?\", \"Table 1,2,3: I don't really understand the reported numbers (what is the distance function exactly in Table 1? What is the distance in Table 2?). I suggest you give an explicit interpretation of those numbers to make it clear to the reader\", \"There is a conclusion that training with hyperbolic embeddings takes more time and iterations. 
However, it's unclear from your experiments if Poincare space embeddings can achieve the same loss as Euclidean ones with a higher number of samples (or iterations) (Table 6 and Table 7) or whether they still lag behind the Euclidean embeddings\", \"Line 418: `Moreover, hyperbolic space requires more than 70,000 samples to achieve adequate training`: what is _adequate_? How do you define it?\", \"__Writing__:\", \"Table 7 has an incorrect title. It's 400-dimensional Euclidean space, not Poincare space\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"the paper proposed to replace the Euclidean embeddings learned in word2vec with hyperbolic embeddings, specifically with Poincare geometry. The method is straightforward - rather than using dot-product, a Euclidean-space similarity measure, the submission measures the distance between two word vectors on a Poincare disk. However, the evaluation approach puzzles me.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. It directly replaces the distance/similarity measure in learning word2vec, which makes the approach easy to conceptualize.\\n2. Under mild assumptions, the submission provides interesting generalization bounds.\", \"weaknesses\": \"It puzzles me that there are many simple 'real-world'-ish datasets for evaluating learned word embedding, but somehow, the submission doesn't provide any of them. IMO, the submission conducts the study as if the problem is orthogonal to NLP.\\n\\n1. 
Having an understanding of the sample complexity, and of how the estimation error bound depends on it, is generally informative. However, in recent years we have found ourselves in a wacky situation in which, for a model to generalize, the training loss just needs to be small, but not very small, because many plateaus in the loss landscape provide models with good generalization; thus, a theoretical understanding of the loss function or the error bound becomes somewhat outdated.\\n\\n2. A crucial aspect or consideration of learning on massive corpora is the complexity of the algorithm itself, which the submission doesn't mention.\\n\\n3. The submission doesn't use common datasets for learning word embeddings, nor does it provide any evaluation on common benchmarks, e.g. SimEval or SentEval.\", \"questions\": \"n/a\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}